I really enjoyed reading the metaphors on this thread.
I liken the cognitive functions to object-oriented programming: intuitives work their perception closer to a high-level language, while sensors work closer to assembly / machine language.
Without any work on perception, things are disorganized, sometimes repetitive, and somewhat daunting. That's where intuitives start to abstract and conceptualize: organizing patterns into objects, and into inheritance among those objects. Soon, what was a 1900-line soup gets cleaned up into a nice, brief table of contents, meta-grouped into three or four categories. Now access to this data is quick and efficient.
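To make that concrete, here is a rough Python sketch of the metaphor; the animal classes and the specific "soup" lines are made up purely for illustration:

# Before the intuitive pass: the same pattern copy-pasted over and over.
def describe_dog():
    return "dog: 4 legs, says woof"

def describe_cat():
    return "cat: 4 legs, says meow"

# ...and so on, for the rest of the soup.

# After the intuitive pass: the shared pattern becomes one object, and the
# variations become small subclasses. The class hierarchy is the new
# "table of contents".
class Animal:
    legs = 4
    sound = "..."

    def describe(self):
        return f"{type(self).__name__.lower()}: {self.legs} legs, says {self.sound}"

class Dog(Animal):
    sound = "woof"

class Cat(Animal):
    sound = "meow"

print(Dog().describe())   # dog: 4 legs, says woof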
But having just a conceptual idea does very little, as life does not come with a default compiler. That's where sensors come in. When it comes to gathering data or perceiving, sensors compile things down into low-level assembly, often working right at the bits and bytes of things.
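Continuing the same made-up example from the sensor side, this time working right at the bit level (the byte layout here is invented just to show the contrast):

# One raw byte of "perception": upper nibble = legs, lower nibble = sound index.
raw = 0b0100_0001

legs = (raw >> 4) & 0b1111        # upper four bits
sound_index = raw & 0b1111        # lower four bits

SOUNDS = ["...", "woof", "meow"]  # hypothetical lookup table
print(f"{legs} legs, says {SOUNDS[sound_index % len(SOUNDS)]}")
# -> 4 legs, says woof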
Extending the analogy, I'd say the rationalizing functions work very similarly. Only now we aren't gathering data, but using, changing, transforming, and inventing it, which eventually leads to an output.
Thinkers, I'd say, are much like the sensors, only in rationalizing data rather than gathering it. When doing the transformations and processing, thinkers work closer to the nuanced details of assembly. This brings a more cognizant understanding of exactly how the information is being rationalized and utilized.
The feelers are more like the intuitives. When it comes to judging and rationalizing, the meta-abstractions collapse and conceptualize the ways to judge for easy, quick access. At the highest level of abstraction, the table of contents may look something like:
do_good();
dont_bad();
This outlines the difficulties when the two types interact. For instance, sensors seem to go about gathering info haphazardly, with no idea or plan of what they are doing. We see them take the most unnecessary steps and redundant efforts for something that would take us a mere 2 seconds to look up. However, we frustrate them as well, with our very bizarre 'up-in-the-clouds' concepts that have little to no actual 'data' or relevance when it comes time to actually perform in a real-life situation. They'd really dare us to put our ideas into action.
It's similar with feelers: they seem to come up with output while having no actual cognizance of the foundation it rests on. To us, their reasons why something is wrong, or good, or worth being enthusiastic about don't match the actual logic and laws we base our reasoning on. It's stressful for them to dig deeper into the definitions of their high-level functions, because doing so strays farther from sentience and closer to something more mechanical, raw, and lifeless. That last sentence pretty well describes their frustration with thinkers, too.
~
There is one more dimension not yet covered: the axis of introversion and extroversion. The way I describe it is probably debatable. I tend to see it as the difference between treating you, me, and others as one whole unit, versus treating each as a separate individual, their own separate unit. What is the degree of connection between what you define as 'yourself' and what is not yourself? Is that boundary bold and strong, or blurred and gradual?
At least, that description holds for the dominant, leading function. In a general sense, I see extroversion as extrinsic and less personal, and introversion as the opposite. With introversion, it's not only about what is 'personal' to just you, but also about considering the other and what makes them 'different'. Extroverted functions like to play with similarities more, where everything shares similar properties and configuration. <- Which, I suppose, could also be conceptualized in a software context.
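For instance, a rough sketch of that last thought in Python, with everything (the dog, the shared config) invented just to illustrate the boundary idea:

# "Extroverted": every object reads from one shared, common configuration,
# so everything looks alike and the boundary between units is blurred.
SHARED_CONFIG = {"legs": 4, "sound": "woof"}

class ExtrovertedDog:
    def describe(self):
        return f"{SHARED_CONFIG['legs']} legs, says {SHARED_CONFIG['sound']}"

# "Introverted": each object keeps its own private state behind a hard
# boundary, so each one is its own separate unit.
class IntrovertedDog:
    def __init__(self, legs, sound):
        self._legs = legs          # leading underscore: mine, not shared
        self._sound = sound

    def describe(self):
        return f"{self._legs} legs, says {self._sound}"

print(ExtrovertedDog().describe())           # 4 legs, says woof
print(IntrovertedDog(3, "arf").describe())   # 3 legs, says arf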