Quite honestly, no, that is extremely unlikely. The reason is simple: for every feature added to TM, an evaluation has to be made that weighs a number of factors:
How important is the feature?
How much time will it take to implement?
How much time will it take to test that feature each time a release is made?
How many events will use the feature?
What is the risk that the new feature will introduce a bug?
What is the risk that someone will misuse the feature and cause an event to halt in the middle of the tournament?
Will this feature make TM more complicated, both from a user perspective and a code perspective?
Will this feature make TM harder to use (most people think TM is too complicated, and want it simplified)?
(To be clear: TM feature decisions are made by VEX and RECF, but the above questions are things that they would certainly consider for any new feature.)
Basically, what you find out as a software developer (or any product engineer, probably) after you’ve been at it a while is that the time and effort needed to initially implement a feature is almost irrelevant - it’s supporting that feature for the next 10 years that is hard.
If you look at the features you’re asking for in this thread and honestly answer those questions I wrote above, I think you’ll see that the cost/benefit tradeoff probably isn’t there. If this was like other robot competitions that have a few dozen events all season, then maybe it would make more sense. However, VEX is massive - there were over 100 events last weekend alone. The priority for TM has to be doing what we can to make life easier for those 100 EPs every weekend using it.
Completely agree. Now that “everybody codes,” the criticism for not implementing features has grown much worse than it used to be, because very few of those “everybody coders” have ever done product lifecycle work. On some projects I’ve worked on, we implemented a support-cost modeling tool. Seeing those numbers will stop you cold.
@Dave Flowerday, you have valid points, and it makes perfect sense if you are responsible for a software system that needs to work reliably over a long period of time and without a mega budget.
On the other hand, let’s not forget that the major goals of VEX and RECF include student education as well as fostering innovation (I don’t think they chose the name “Innovation First” by accident).
One of the best ways to teach students is by setting an example of how to innovate and push the boundaries, and that’s exactly what @TheColdedge is doing.
So, is there a way to build the system described in OP without breaking TM functionality or putting extra burden on its developers?
A number of years ago I had to do a project with challenges that sound very similar to what this thread is discussing.
There was a piece of proprietary software running on the PC that controlled equipment connected through a serial port. Neither source code nor any support was available from the OEM any longer. The software had a control interface that only ran in DOS mode (it was that old). There were 4 of those systems sitting in different remote rooms, and originally Norton pcAnywhere was used to dial in and check the status of each one. At some point the PCs were upgraded to Windows 2000 and connected to the network.
In addition to displaying status in its UI, the control software saved its state into a file, but if you tried to read that file from another process, the control software would occasionally crash. Most likely, when it needed to update the status in the file, it opened the file with something like the shareDenyRead or shareExclusive option, and probably didn’t even check the return code when the call failed, instead of retrying after some timeout.
So even if you opened the status file from another process for a very short period of time, and even with the shareDenyNone option, there was still a risk of the control software trying to write at the same moment and crashing.
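To make that concrete, here is a minimal, portable sketch (Python, not the original DOS-era environment) of the retry-with-backoff pattern the OEM software was missing: when the file is held exclusively by the other side, back off and try again instead of failing outright. The `opener` parameter is a hypothetical injection point added purely for illustration and testing.

```python
import time


def read_with_retry(path, attempts=5, delay=0.2, opener=open):
    """Try to slurp a status file, backing off if the writer currently
    holds it exclusively (on Windows, an exclusive open by the other
    process surfaces as PermissionError here)."""
    last_error = None
    for _ in range(attempts):
        try:
            with opener(path, "rb") as f:
                return f.read()
        except (PermissionError, OSError) as exc:
            last_error = exc
            time.sleep(delay)  # writer is mid-update; wait and retry
    raise TimeoutError(
        f"could not read {path} after {attempts} attempts"
    ) from last_error
```

The same loop, mirrored on the writer side, is what would have kept the control software alive: a failed open becomes a short sleep and a retry rather than an unchecked error.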
The workaround that I eventually came up with was to pin both processes to the same CPU (via the SetAffinity flag) and run my monitor process at a higher priority than the DOS VM with the control software. Then, after sleeping for ~1 min, the monitor process would open the status file as memory-mapped, slurp it into memory with one bulk copy, and close it. After that, the file content would be sent over the network to a centralized dashboard.
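The read side of that monitor loop looks roughly like this; a portable Python sketch rather than the original Win32 code, with the affinity and priority tricks omitted. Note that `mmap.mmap` assumes a non-empty file, and the goal is to keep the window during which the file is held open as short as possible:

```python
import mmap


def slurp_mapped(path):
    """Map the whole status file and copy it out in one shot, so the
    file handle is open for the shortest possible window."""
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            return bytes(mm)  # one bulk copy of the whole mapping
        finally:
            mm.close()
```

In the original setup, the higher process priority plus shared CPU affinity was what made the “short window” part actually hold: the DOS VM couldn’t be scheduled to write while the monitor held the mapping.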
I don’t remember if I had to put some sort of critical-section calls around the file-reading operations or found some other guarantee that the kernel wouldn’t switch to the DOS VM while the monitor was reading the file. However, I am pretty sure it wasn’t “let’s deploy it and hope it’s not going to crash often.”
There has got to be a way to duplicate the database content that won’t jeopardize the integrity of TM’s operation and doesn’t need continuous support from the TM devs.
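One possibility, under a loud assumption: I don’t know what storage format TM actually uses, but if its database turned out to be SQLite, the online backup API is designed for exactly this case, taking a page-by-page consistent snapshot of a live database without corrupting it or blocking the writer for long. A sketch with hypothetical file paths:

```python
import sqlite3


def snapshot_db(src_path, dst_path):
    """Copy a live SQLite database via the online backup API.
    src_path / dst_path are hypothetical; TM's real storage format
    is an assumption here, not a documented fact."""
    # mode=ro: open read-only so this process cannot modify the source
    src = sqlite3.connect(f"file:{src_path}?mode=ro", uri=True)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst)  # consistent page-by-page snapshot
    finally:
        dst.close()
        src.close()
```

A monitor process could take such a snapshot on a timer and serve the copy to dashboards, so nothing outside TM ever touches the live file directly.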
I just don’t believe that, with so many experienced engineers in this thread, we can’t come up with a solution that makes everyone happy in both the short and the long term.