But this is only true if the character ordinal values are less than 128.
Or if you use an encoding that maps Unicode codepoints 0..255 directly to bytes 0x00..0xFF. I don't have a B4X environment on this computer, but at home I did sweep through all the available mappings, and there was one (most likely ISO-8859-1, aka Latin-1) where a 256-character string of Chr(0) to Chr(255) translated to a 256-byte array of 0x00..0xFF.
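For illustration, here's that round-trip sketched in Python rather than B4X, since the property belongs to the charset, not the language (assuming the mapping in question is Latin-1):

```python
# Sketch: Latin-1 maps Unicode codepoints 0..255 one-to-one onto
# bytes 0x00..0xFF, so a 256-character Chr(0)..Chr(255) string
# round-trips to a 256-byte array with no loss.
s = "".join(chr(i) for i in range(256))
b = s.encode("latin-1")
assert len(b) == 256
assert list(b) == list(range(256))  # byte values 0x00..0xFF in order
assert b.decode("latin-1") == s     # and back again, losslessly
```

Any codepoint above 255 would raise an encoding error under Latin-1, which is exactly why only that one mapping in the sweep behaved this way.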
I must now change my thinking regarding buffer size in memory versus displayed characters on screen: it is a dynamic relationship that depends on the actual character content.
I believe in B4X it is still a fixed ratio between characters and bytes, but rather than the previous 1:1 it is now 3:1 or 4:1. I haven't tested it, but I doubt it is 2:1 because I have used some formatting characters whose codepoints don't fit in 16 bits. Unicode began as a 16-bit code (its ISO 10646 sibling originally allowed up to 31 bits) but has settled into a 21-bit code (17 planes of 65,536 codepoints each, a bit over a million available codepoints). It is possible that some programming environments store Unicode as 3-byte codes, but I'd guess that most would use 4-byte codes because that is a more natural size for binary computers.
Technically a BYTE value has no sign; it's just 8 bits.
I would agree with this, but in some more esoteric edge cases you might get some pushback - there are many and varied ways of interpreting bits, and base-2 is just one of them, albeit the most common.
A byte holding a value -128 to 127 would be a short int.
B4R would disagree with you on that. I feel like we are conflating
byte (a collection of 8 bits, aka an octet) and
Byte (a data type, an interpretation of those bits).
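That distinction can be made concrete: the same octet yields different values depending on whether you read it as signed or unsigned (a Python sketch, just to show the two interpretations side by side):

```python
# One octet, two interpretations of the same 8 bits.
raw = bytes([0x80])  # bit pattern 1000 0000

assert int.from_bytes(raw, "big", signed=False) == 128   # unsigned byte
assert int.from_bytes(raw, "big", signed=True) == -128   # two's-complement
```

The bits never change; only the data-type interpretation does.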
B4X doesn't really support BYTE, WORD or DWORD types.
I certainly miss having the option of unsigned versions of them, and compatibility between B4R types and the rest of the B4X dialects. But that's the way it is, and unlikely to change, so... here we are.
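A common workaround when only signed bytes are available is to mask the value back into 0..255 (sketched in Python here; in B4X the equivalent idiom would be Bit.And(b, 0xFF) — my suggestion, not something from this thread):

```python
# Recover the unsigned 0..255 value from a signed byte by masking,
# keeping only the low 8 bits of the two's-complement representation.
def to_unsigned(signed_byte: int) -> int:
    return signed_byte & 0xFF

assert to_unsigned(-1) == 255
assert to_unsigned(-128) == 128
assert to_unsigned(100) == 100
```

It's boilerplate you shouldn't need, but it makes signed-byte APIs livable.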