I ran into a situation similar to this one: Faster TCPSockets but backwards.
I was developing some automation for a particular device (not with Xojo), but the person I was going to borrow the hardware from sold it (thanks Covid).
Rather than give up, I built a simulator in Xojo based on the protocol documentation. I have done this for a few other simple devices. This one is more involved, but now I could test the remote automation I was writing without the actual hardware. The protocol uses UDP so I used UDPSocket for communication.
On startup, the remote control takes a snapshot of the device settings (over 3200 values) in order to maintain a live indication of the status.
I was very disappointed to discover the throughput from my emulator was only 10-15 values per second. After inserting as many #pragma options as possible, I ended up with 25-35 per second. That still took almost 2 minutes to initialize.
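For reference, these are the standard Xojo pragmas that turn off per-line runtime checks inside a hot method. This is only a sketch of the idea, not the exact set used above, and the checks exist for a reason, so disable them only in code you have already debugged:

```xojo
Sub ProcessDatagrams()
  ' Disable runtime safety checks for this performance-critical method.
  #Pragma BackgroundTasks False        ' don't yield to other Xojo threads here
  #Pragma BoundsChecking False         ' skip array index checks
  #Pragma NilObjectChecking False      ' skip nil-object checks before member access
  #Pragma StackOverflowChecking False  ' skip stack depth checks

  ' ... hot loop that parses incoming datagrams goes here ...
End Sub
```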
I was starting to worry about the usability of my automation program, then I obtained access to a live unit. It is on another continent, but, Internet!
Using a TeamViewer-like remote session into the remote network, and controlling a real device from there, loads an average of 1300 values per second. 2-3 seconds to initialize is very acceptable.
Using a remote VPN (from US to Europe) averages 300 per second depending on internet traffic. This is usable for development without tying up a remote computer.
Based on the discussion in the other thread, I re-implemented the core socket using @MonkeybreadSoftware's UDPSocketMBS. That version averages around 300 per second.
Conclusion: Xojo UDPSocket is 10x slower than MBS, and 40x slower than a real device.
Yes the built-in UDP socket on Linux in Xojo is dog slow.
The UDPSocketMBS is much faster.
For even better data-transfer throughput, consider using a console app rather than a GUI, and drain the socket with a polling loop:
while my_UDPSocketMBS.AvailableBytes > 0
  my_datagramMBS = my_UDPSocketMBS.Read(False)
  // use that datagram here
  my_UDPSocketMBS.Poll
wend
For the best transfer speed you will also need to increase ReceiveBufferSize and SendBufferSize to higher values.
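As a sketch of that tuning (the property names are the ones given above; the value is an illustrative assumption, worth benchmarking on your own network rather than copying):

```xojo
' Enlarge the OS socket buffers before starting the transfer.
' 1 MB is an example figure, not a recommendation from this thread.
my_UDPSocketMBS.ReceiveBufferSize = 1024 * 1024
my_UDPSocketMBS.SendBufferSize = 1024 * 1024
```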
If you need good UDP data transfer, the above will easily get you a sustained 20 MB/s (160 megabits/s) or more, even on a netbook, assuming you have written an efficient sliding-window transfer and a binary class that can keep it fed from another helper process via MBS shared memory.
That is almost identical to what I have.
Adding poll increased throughput 2-3%.
There was not much difference increasing the standard buffer sizes past 2x.
I definitely need the GUI. Almost everything on this screen is interactive: it sends any adjustments made to the remote, as well as displaying any updates the remote reports.
You could use a combination of console and GUI apps with shared memory in between as a low-latency IPC. Separate Xojo Linux processes for socket operations and for drawing the interface will also benefit from multiple cores on the machine and give the best interactive user experience.
That gives the best of all worlds, but I think for the amount of data you get in a stream on a soundboard the GUI poll rate should be enough.
After the initial data sync, it works well. The 2 minute startup was annoying.
I can deal with 10 seconds.
I only use about 3500 of the 30000 data points actually available in the console.
For now, it pretends to be a console. After I finish with the other project, I may try inverting the conversation to see how well this would drive a real board.
I figured that out the hard way. The rest of the code does not seem to be able to handle the data any faster, and adding extra polls just stacks up and clogs the pipeline.