Before the droplets are merged, there are 1+1=2 water droplets; after they are merged, there is only 1 droplet. There is no contradiction; you simply get a different result counting the droplets after they merge than before, because the number of discrete droplets actually does change. Presumably, though, you would get the same result weighing them before or after, because the amount of water doesn't change.
That's what I meant. There is no actual contradiction, of course, but you will be led to one if you misunderstand the nature of addition. For example, if you confuse addition with the merging of droplets, then it isn't far to conclude that the result of merging (1 droplet) is also the result of addition. This is an example from some real-life internet people who honestly seem to think 1+1 can be 1 in such cases in the traditional decimal number system.
I can say that angels sit on pinheads, but I can't count the angels on the head of a particular pin. I can count pins, sandbags, car accidents, and water droplets, but I can't count utility because nowhere in the world is there anything you can point to and say, "There! That is a portion of utility. And there! Another that we can add to the first." Qualitative experiences are "in here", not "out there" to be counted like coins or produce.
The expected hedonic value of my ABC example is a little over 4, but again, so what? Nobody experiences 4. The actual experiences are 10, -3, and 5.3289. That's it. 4 is nothing more than an empty abstraction. Making a decision that brings Alice up to 0 is good...for Alice. It makes no difference at all to Bob or Charlie.
I can sort of see where you are coming from. While it makes some sense, it still isn't totally clear to me. It seems that we have different intuitive starting points altogether.
First, since there are different variations of utility, let us consider a specific instantiation - say, expected hedonistic value - calculated with the formula:

EHV = Σ_(i=1 to n) P(person_i) × hedon(person_i)

where P(person_i) is the probability of having a hedonistic experience similar to person i's.
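To make the formula concrete, here is a minimal Python sketch. The function name and the uniform 1/3 weighting are illustrative assumptions on my part; the hedon values are the ones from the ABC example discussed above.

```python
# A minimal sketch of the EHV formula above, with stipulated (hypothetical)
# hedon values and uniform probabilities -- purely illustrative, not a real
# measurement procedure.

def expected_hedonic_value(people):
    """Sum of P(person_i) * hedon(person_i) over all persons."""
    return sum(p * h for p, h in people)

# The A, B, C example from earlier in the thread, each weighted 1/3.
abc = [(1/3, 10), (1/3, -3), (1/3, 5.3289)]
print(expected_hedonic_value(abc))  # ~4.11, "a little over 4"
```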
Sure, we can't exactly find this 'utility' anywhere in the world as anything concrete, or even as any qualitative experience.
But now, let us compare it with some other statistic - like expected-deaths-by-cows-per-year (EDBCPY).
Where exactly is any EDBCPY? Can you find EDBCPY in your shoes? In a mall?
Now, you can say: but EDBCPY is a measure of (roughly average) deaths by cows - i.e. concrete physical deaths caused by actual cows, all of which are empirically observable.
But isn't an analogous case also true for expected hedonistic value (EHV)?
Doesn't EHV correspond to actual real life sufferings of people?
Now you can argue that EHV doesn't really correspond to any specific suffering of a particular person. It doesn't say how badly a person A is suffering, or even how many people are suffering (though 'how many' can be answered by using a different utility formula). You can say that the sufferings of different people don't interact with each other, and so on.
But 'deaths-by-cows' don't need to interact with each other either. They can be personal experiences too, and one person's death by cow says nothing about another's. So what is the fundamental difference here?
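To underline the structural point, here is a toy sketch in which EDBCPY and EHV are computed by exactly the same operation - an expectation over a sample. All the numbers are made up for illustration.

```python
# Illustrating the structural analogy: both EDBCPY and EHV are just
# expectations over observed samples. All figures below are made up.

def expectation(samples):
    return sum(samples) / len(samples)

deaths_by_cows_per_year = [2, 0, 1, 3, 1]   # hypothetical yearly counts
hedonic_intensities = [10, -3, 5.3289]      # hypothetical per-person hedons

print(expectation(deaths_by_cows_per_year))  # EDBCPY = 1.4
print(expectation(hedonic_intensities))      # EHV ~ 4.11
```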
Next you can argue, as you are doing, that deaths-by-cows are empirically observable features we can count, while sufferings are qualitative subjective experiences.
But note that suffering is still a real experience, a real 'thing', and it's not clear why it can't have any quantitative representation. We can use discrete quantification (counting 1 for each person suffering). We can, in principle, use continuous real-number representations corresponding to the intensity of suffering. But how do we measure qualitatively experienced intensity? That's more of a practical problem; I would distinguish it from the in-principle concerns about utilitarianism. Nevertheless, if features of our neural makeup can be found to be approximate indicators of the hedonistic tones people experience - if they are found to correlate and be consistent with other external indicators (expression, speech, body language) and with the experiencer's own testimony - then they can be used as a fair measurement.
It won't be perfect. We can have epistemic concerns related to knowing 'other minds'...but if we go that route, we never really have much epistemic certainty over anything.
If we can measure the intensities, we can easily add them up to form an expected hedonistic value - and that, too, from approximately empirical features (neural makeup), not too different from other statistical measures. Expected-value measures can easily be used with subjective values anyway. There never was a theoretical restriction to begin with.
Now, you can argue that even if that's true, EHV doesn't tell us anything about the world. But it does seem to tell us something clearly: the expectation over a particular real-life sample (from the 'world') of intensities of hedonistic tones.
That's what we directly measure.
But you can say that it doesn't really measure anything particularly useful.
It doesn't tell us anything about any particular person's suffering or degree of suffering.
But it isn't supposed to. And just because it doesn't tell us anything in particular about any single instance of personal suffering doesn't mean it is useless or totally abstract.
We can use it to measure the 'expected' value of suffering. An anti-natalist who finds the expected value of hedonistic pleasure to be negative can argue that it's not a good idea to breed if we want to reduce suffering. Furthermore, suffering is tied to the world and its causes and conditions. If a statistical measure like EDBCPY is very high in city X compared to other cities, one can reasonably assume there are some causes and conditions behind it - that there may be something abnormal. Perhaps it's the only city with cows; perhaps it has too many cows compared to people; perhaps it has a particularly aggressive breed of cows; perhaps it's about something in the food they feed the cows; perhaps the issue lies in the cow education system, which may need some reinvention - you get the idea. This can encourage us to look deeper into the issues. A tourist can then, for example, learn to be extra careful around cows when visiting city X if he/she knows the EDBCPY value of city X.
Similarly, if the EHV value is very low in city X, perhaps even negative, then again we can assume there are some causes and conditions. If, from experiments with some samples and some city-wide surveys, the EHV values come out especially low, that should encourage us and people in power to do something: find the issues and causes and improve overall happiness. Widespread unhappiness may often have impersonal causes - perhaps related to political infrastructure, some environmental issue, perhaps the widespread death and destruction brought by the evil cows, or whatever. This can help a space-alien immigrant choose a better city to settle in. Similarly, one can do surveys and deeper investigations to find out whether there are general causes of suffering, which there likely are. Furthermore, we can take ideas from places where EHV values are high. It does, then, seem to me to have a number of potential uses, and not to be too fundamentally different from EDBCPY. And of course, EHV would be central to the hedonistic utilitarian.
In general, you are posing several points about utility as critiques, but it's far from clear why those points, although accurate, are supposed to be negative in the first place. Which is my point: we are starting from different intuitive frameworks here. And given your own admission about how many utilitarians there are, I assume there are many others who don't share your intuitive framework.
Also, the way in which EHV varies with the dynamics of suffering among different people seems to correlate with our general intuitions about morality to at least some extent.
For example, in the case of A, B, and C, if their experiences are 10, -3, and 5.3, then EHV will be 1/3 × 10 + 1/3 × (-3) + 1/3 × 5.3 = 4.1. It immediately tells us that overall, more people are likely to be happy - or, if more people aren't happy, that the happy people are quite a bit happier than the sad people are sad. Of course, how good a value 4.1 is depends on the type of scale you use. Overall, it does seem to say something quite relevant about the world, and something real about it (if the measurements are done through some reliable means).
Next, intuitively, it would in general be a good action to reduce the suffering and/or increase the happiness of a person.
Let's say we reduce B's suffering to 0. Then EHV increases to 5.1 - a net increase, and thus, according to hedonistic utilitarianism, our action was good, which matches the general intuition.
We may think it's best to improve the happiness of everyone - make everyone happy - that would increase EHV even more.
We may think it's bad to increase someone's suffering. If we increase suffering - say, turn A from 10 to 7 - that will obviously decrease the EHV (to 3.1).
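A quick sketch tying these three cases together (uniform 1/3 weights and the same illustrative hedon values as above):

```python
# Sketch of how EHV varies under the interventions discussed above
# (uniform 1/3 weights, hypothetical hedon values).

def ehv(hedons):
    return sum(hedons) / len(hedons)

print(ehv([10, -3, 5.3]))  # baseline: 4.1
print(ehv([10, 0, 5.3]))   # B's suffering removed: 5.1 (increase -> "good")
print(ehv([7, -3, 5.3]))   # A harmed: 3.1 (decrease -> "bad")
```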
So a utilitarian can point out that EHV can often 'vary' (the keyword here) in a way such that seemingly ethical actions tend to increase it and seemingly unethical ones tend to decrease it. So one can further argue that even if EHV in itself doesn't say anything specific enough, its variation with respect to our actions does tell us something important about the scale and nature of the consequences of our actions.
The tricky part comes, of course, in cases like trolley problems. What if the dynamics among A, B, and C are such that A's happiness depends on B's suffering? Say A gains twice as much happiness as B loses. That is, increasing B's suffering will increase A's happiness even more.
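To make that concrete with the same illustrative numbers: if B drops by 2, from -3 to -5, then A rises by 4, from 10 to 14, and EHV goes from 4.1 to (14 - 5 + 5.3)/3 ≈ 4.77 - so the crude formula scores the harm to B as a net improvement.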
This is the part where one person's suffering can be 'counter-balanced' by another's pleasure. You may find this absurd, but again, while that may be your intuition, it's not clear to me why it is somehow obviously absurd. No one, of course, thinks that the 'counter-balance' here is anything literal - that B's suffering is alleviated in any way by A's pleasure (there would be no dilemma if it were) - nor does anyone think there is some literal cosmic scale that gets balanced and satisfied when pleasure counters suffering, or anything like that. People are aware of what is at stake. And no one is happy with actions like that, even if some of them consider the action to be good.
Even by some crude hedonistic utilitarian standard, sacrificing B for A's happiness is not the best course of action - a better course would be to maximize everyone's happiness, of course - but it's relatively better than needlessly causing suffering to everyone. Again, to this end, the variation of the EHV measure w.r.t. our actions still seems to correlate with our intuition.
It seems then that sacrificing B is only an optimal choice when any better course of action is impossible or highly improbable to pull off.
This is similar to the mindset of "lesser evil for greater good". If the suffering of a few can bring greater happiness to a lot of people, it's not immediately clear whether that's a good thing to do or not. If happiness is supposed to be good, and the more the better, then it should be a good thing. Sure, this is where it can get controversial and intuitions can clash. But in that case, it would be too rash to jump to either side without any strong justification beyond intuitions of absurdity.
Still, this particular scenario - increasing the happiness of someone already quite happy at the cost of further suffering for someone else - may seem especially revolting. But, to better fit our intuitions, we can always refine the formula: by weighting negative values heavily (or making a hard rule to always prioritize reducing suffering over increasing the happiness of the already happy), i.e. using some variation of negative utilitarianism; by decreasing the rate of increment for someone already happy (which can be justified by psychological principles); and so on. Let's not get into too much detail about the engineering of the utility function right now; I am more interested in the in-principle issues.
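Just as a sketch of what such engineering could look like - the weight k and the logarithmic damping below are arbitrary illustrative choices on my part, not a settled proposal:

```python
# A hedged sketch of "engineering" the utility function as suggested above:
# weight suffering more heavily than pleasure (a negative-utilitarian flavour)
# and apply diminishing returns to those already happy. The weight k and the
# log damping are arbitrary illustrative choices.

import math

def weighted_hedon(h, k=3.0):
    if h < 0:
        return k * h          # suffering counts k times as much
    return math.log1p(h)      # diminishing returns on happiness

def weighted_ehv(hedons):
    return sum(weighted_hedon(h) for h in hedons) / len(hedons)

# Under this weighting, boosting A (already at 10) while deepening B's
# suffering no longer scores as an improvement:
print(weighted_ehv([10, -3, 5.3]))   # baseline
print(weighted_ehv([14, -5, 5.3]))   # A up, B down: lower than baseline
```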
You said that if we increase A's happiness, it is 'only good' for A. But then you are personalizing good too much. You haven't yet given any arguments for or against moral relativism or moral anti-realism in general. Let's, for the sake of argument, assume that there are normative moral principles (that moral realism is true). The question is whether utilitarianism fits as a normative moral principle: does it lead to some kind of absurdity? Does it generally fit our intuitions well? Granted, intuitions don't exactly tell us whether something fits as a normative principle, but intuitive plausibility still often gets used as a criterion for plausibility. Anyway, I am not here to argue for or against the normativity of morality, or of utilitarianism in particular; my point is that if we are discussing normative morality, let's focus on that. If we are thinking about normative morality, morality should consist of universal principles, like the principles of logic and maths. Something shouldn't be logical just for A; if something is logical, it is logical, period. (Let's also just assume there are universal epistemic norms related to maths and logic - principles of non-contradiction and such - not necessarily existing in any ontologically extravagant sense.) From this perspective, if an action is good, it is just good - not good just for someone. What is the criterion for the goodness of an action in this case? Whether it maximizes utility. If A's happiness is increased by an action without any cost, the action is simply good by the utilitarian definition; in the hedonistic sense it is good only to A (only A experiences the positive increment) - but so what? The point is that overall happiness in the world increases a bit, and even if that is due to the increment of happiness of only one person, A, it can still be considered something good to be done. I didn't know morality was supposed to be about increasing the hedonistic value for everyone with every action. So what's the point of stating the obvious, that only A gets the benefit here?
If both A's and B's states are improved, then there would likely be an even higher EHV - it would be even better. So higher utility, higher goodness, would correspond to a higher number of people being happy, or to more people being happier than other people are sad (and if we want to prioritize reducing suffering over increasing the happiness of the already happy, we can do that with negative utilitarianism). So EHV and the variation of its values correspond to the real-life states of real-life people.
So I don't really get your point here. Your points are correct, but obvious. We all know that if we do good to A only, only A will experience the benefit, but utilitarianism doesn't say otherwise. If only A gets the benefit,
("Making a decision that brings Alice up to 0 is good...for Alice. It makes no difference at all to Bob or Charlie.")
so what exactly? It is still SOMEONE experiencing the benefit - the greater happiness increasing the net EHV - which tells us that our action does more good than harm. If A gets a bit happier, everyone else remaining the same, it still seems intuitively plausible to say that a little bit of good is done (even if the good is experienced by one person). And since the good is done to only one person, the net increase should also be relatively small compared to the same good being done to many people. So it still fits our intuition: if we help just one person, only one person experiences the benefit, and as a result not much change is observed in the net EHV.