Yes, it is very computationally intensive for the poor little Cortex, with its limited memory and aging ARM Cortex chip.
I doubt you'd have enough sensors to manage the angles of all 22 of those degrees of freedom. Can you pare it down to a few?
If you are looking at Vex parts, each arm would be its own Cortex full of motors and sensors.
shoulder joint pitch (along the body)
shoulder joint roll (away from the body)
elbow joint pitch (I think there’s only 1 DOF there)
wrist yaw (side to side)
wrist tilt (up down)
wrist twist (turn)
Now you have just about run out of pots and IMEs to hook into the Cortex.
From the shoulder outwards, infer the position of the hand via math. Get the velocities and accelerations of each joint. Figure out where the hand is, as well as each joint along the way.
Then comes the path analysis of where you want to get the “hand” to. Like you said, there are constraints on humans for each of these joints you want to apply.
The pot reading to angle is not a 1:1 measurement, but you can get a fairly good formula to approximate it. From the angles, get the angular velocity of each joint. The lengths of the arm segments then get you velocity vectors, each relative to its own segment's space. Go down the arm and you have the whole arm's motion computed for where you are.
Now it's a decision on where you want to be and how much force is needed to get there. I don't know the path that "normal human motion" would take for a hand to get from point A in orientation A to point B. Is it mainly lower-arm based or upper-arm based? Knowing that can help you pick the movement strategy you want to employ.
I made a robotic dragon a couple of years ago that had 12 DOF and around 18 motors. I ended up using two Cortexes. The code between the two is fairly easy if you have a master and a puppet Cortex. One could control the arms, and the other could control the rest of the robot and send the arms general positions to go to, or general functions to perform.
On the idea of inverse kinematics, take a look at this site.
It does a pretty good job of explaining some of it.