If I have a UInt16 called i, how would I get the most and least significant bytes (big endian) using division (not shifting as it needs to work on iOS)?
Var i As UInt16 = 500
Var msb, lsb As UInt8 // What's the math?
For extra credit, how would one do the reverse. That is, given a most significant and least significant byte, get the UInt16 value?
Var msb As UInt8 = 10
Var lsb As UInt8 = 67
Var i As UInt16 = BytesToUInt16(msb, lsb) // ?
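One way to do the math with integer division and modulo (no shifting), sketched in place of the hypothetical BytesToUInt16 helper above:

Var i As UInt16 = 500
Var msb As UInt8 = i \ 256 // integer division gives the high byte (1 for 500)
Var lsb As UInt8 = i Mod 256 // remainder gives the low byte (244 for 500)
// Reverse: scale the high byte back up and add the low byte
Var j As UInt16 = (msb * 256) + lsb // 1 * 256 + 244 = 500 again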
dim mb as new MemoryBlock(2)
mb.LittleEndian = false
mb.UInt16Value(0) = whatever
dim low as UInt8 = mb.UInt8Value(1)
dim high as UInt8 = mb.UInt8Value(0)
Comparing MemoryBlock writes and reads vs whatever processor instructions get used (probably plain integer ALU instructions), I expect they would be pretty close - maybe the integer divs would be slightly quicker
I thought I’d do a test to see which of the two methods is faster. Here’s the code:
Public Function UInt16ToBytesArithmetic(i16 As UInt16) as Pair
// Returns the passed 16-bit unsigned integer as a Pair (MSB : LSB).
// Uses arithmetic for the computation.
Var msb As UInt8 = Floor(i16 / 256)
Return msb : i16 - (msb * 256) // MSB : LSB
End Function
Public Function UInt16ToBytesMemoryBlock(i16 As UInt16) as Pair
// Returns the passed 16-bit unsigned integer as a Pair (MSB : LSB).
// Uses a MemoryBlock for the computation.
Var mb As New MemoryBlock(2)
mb.LittleEndian = False
mb.UInt16Value(0) = i16
Return mb.UInt8Value(0) : mb.UInt8Value(1) // MSB : LSB
End Function
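For the extra-credit reverse direction, a sketch that mirrors the MemoryBlock function above (BytesToUInt16 is the name assumed from the original question):

Public Function BytesToUInt16(msb As UInt8, lsb As UInt8) As UInt16
// Recombines two bytes into a big-endian UInt16 using a MemoryBlock.
Var mb As New MemoryBlock(2)
mb.LittleEndian = False
mb.UInt8Value(0) = msb // high byte goes first (big endian)
mb.UInt8Value(1) = lsb
Return mb.UInt16Value(0) // e.g. msb = 10, lsb = 67 gives 2627
End Function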
// Put this test code in `App.Open()`:
Var startMB As Double = System.Microseconds
For i As Integer = 1 To 100000
Call UInt16ToBytesMemoryBlock(System.Random.LessThan(65536))
Next i
Var totalMB As Double = System.Microseconds - startMB
Var startArithmetic As Double = System.Microseconds
For i As Integer = 1 To 100000
Call UInt16ToBytesArithmetic(System.Random.LessThan(65536))
Next i
Var totalArithmetic As Double = System.Microseconds - startArithmetic
Break
make sure you do the test in a compiled app
debug mode is not always properly illustrative
edit: would be fun to poke at that in a decompiler to see what the compiled code ended up being
Wonder if it's shifts and other really simple instructions, which should be very fast
You are recreating the MemoryBlock (allocating memory) with every call, which is not a real-life test IMO. I would suggest trying it the way I would write something like that:
Public Function UInt16ToBytesMemoryBlock(i16 As UInt16) as Pair
// Returns the passed 16-bit unsigned integer as a Pair (MSB : LSB).
// Uses a MemoryBlock for the computation.
Static mb As New MemoryBlock(2)
mb.LittleEndian = False
mb.UInt16Value(0) = i16
Return mb.UInt8Value(0) : mb.UInt8Value(1) // MSB : LSB
End Function
Never one to be accused of favouritism against my friend @npalardy, I compiled the app on macOS with both the default settings and aggressive compiler settings.
Looks like the arithmetic approach is faster. For what it's worth, even if the aggressive compiler settings managed to speed the MemoryBlock approach up to parity, I would still choose the arithmetic approach since the test app took about 45 seconds to compile on my iMac Pro with aggressive settings and about 2 seconds on default settings. LLVM was definitely trying hard! The MemoryBlock approach is certainly more readable, though.
An excellent point. I implemented the change and it helps a little but not enough to beat the arithmetic approach.
On a related note, I literally only discovered Static variables today. They are awesome. I’ve been able to remove lots of Initialise() methods in my modules thanks to putting a static variable in a computed property.
there is ONE instance across ALL invocations of the method
in a module this may not be a bad thing
but in a class it might be, since there is ONE static across ALL of its instances
they are a lot like having a shared property which only exists once across all instances
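A tiny sketch of that behaviour (Counter is a hypothetical method, not from the thread):

Public Function Counter() As Integer
// The Static survives between calls: there is one `count` shared by
// every invocation of this method (and, on a class, by every instance).
Static count As Integer
count = count + 1
Return count // 1 on the first call, 2 on the second, and so on
End Function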