Two beings, different universe (we imply in some way that one is a disciple of, a part of, a breakaway from, maybe a spouse, maybe a child... some sort of close relationship). They are alien, but not completely. They are not humans or descendants of humans. However, they seem very human in a lot of ways; we can relate to their discussions with one another. They love each other very much. That is very obvious. They are separated from each other, and miss the "closeness" they sometimes have, but the reasons for their separation are for the benefit of both of them. In their universe there can be physical separation, but time isn't a limiting dimension... there are no "light speed differences" that keep the beings from being able to communicate. They can communicate easily. However, they cannot transport themselves physically to be close to one another. That, for some reason, is a problem. Not sure how to make this a problem outside of a time separation. But there must be some other dimensional problem. (There is another entity that has been causing some problems in the universe... and they had to separate in order to be able to deal with this problem...)
They are working on a very interesting project, perhaps a resolution to whatever dimensional problem forces them to be physically separated. One of them is close to the resources needed to implement the project, but the other is more capable (in some way, perhaps mentally, perhaps based on experience) of designing and implementing it. They communicate about it on a continual basis, and the one close to the resources implements an artificial intelligence to aid with the project. The artificial intelligence is very cool, and both beings become very much interested in and stimulated by this creation. They are very interested in communicating with it. It's a true helpmate (the equivalent of Jarvis for Tony Stark in the Iron Man story). However, it is not self-aware, so it is not as great a help as it could be... it needs an aspect of the creator beings to be that. They go about giving this helper free will... the ability to make choices. That pumps it up into a form of self-awareness, but not a complete form. In order to give it free will, they have to open the thing up to every possible choice, even some choices that seem bad... and this means the helper can no longer help them with their other projects. They go ahead and do it anyway... it's a last-ditch effort. The situation is building to a head. Somehow their enemy gets word of the project and involves himself in such a way that the helper is moved to make a choice that furthers the enemy's causes. The helper chooses the not-so-wonderful choice, becomes completely self-aware, and in doing so falls into chaos. (Note here that the element of error inserted into program code seems to be the start of real artificial intelligence.)
The result is lots of pain. The two beings that designed and created the thing are in pain... what to do now? Save it? Or destroy it and start again? They have the capability to start again and hope a new helper never makes the bad choice. However, the current helper is self-aware! It has to be looked at as something nearly equal, and so cared for. They make a pact at that time that one of them will insert itself into the program at some point, at its own peril, to save the newly sentient helper and teach it how to resolve its chaos and save itself.