I'm trying to get a better grip on the concept of morality, so I'd like to pose a question.
If we could make a machine that could create humans, may we use those humans?
Or if we could create an intelligence with a consciousness, may we use them?
Any opinions will do, don't be shy!
I'm going to pretend you're serious and just give a few replies for your consideration.
A Kantian might say that we cannot use those machine-created humans, or those created intelligent and conscious beings, merely as means if they are ends in themselves. And, quite likely, they would be ends in themselves.
A utilitarian might say that they are due the same liberties as anyone else. If the machine-created humans were relatively ignorant and not self-aware, then the issue might be the degree and quality of the pleasures, pains, and preferences they could experience. Their experiences would count neither more nor less than anyone else's similar kinds of experiences.
A natural law theorist might say that these beings are a violation of the purposes or order of things. However, a religious natural law theorist would probably suggest that a human being, however it was created, should be recognized as a gift of God and therefore accorded the same dignity as any other human being. A conscious, intelligent being of some other sort might, however, be rejected by such a theorist.
A contractarian like Hobbes, Locke, or Rousseau would probably have to determine the place such an individual would occupy in society, and then determine whether and to what degree such a being could be a free agent capable of making choices about trading autonomy for benefits.
Bradley would probably want to know whether the being (human or otherwise) were capable of determining itself, capable of self-realization. If it were, then it would be treated like anyone else with those capacities and therefore not subject to being used by others.
There are at least a half-dozen other views, but this seems enough for now.