The latest and greatest sbbsexec.dll and dosxtrn.exe can be found in the nightly builds of Synchronet for Windows:
How did you determine the read/writes were "nonsense"?
I'd be happy to try to address whatever issues with the UART emulation aren't working for you, but please update to the latest, get new/updated debug log output, and share it with me.
As I looked into what was going on, I moved to lower- and lower-level diagnostic programs until I finally just wrote my own so I'd know exactly what was being done on the program side.
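To give an idea of what such a bare-metal test boils down to, here's a minimal sketch that bangs on the UART registers directly. The COM1 base address and register layout are the usual 16550 ones, and inportb/outportb are from a Borland-style 16-bit DOS compiler's dos.h; none of this is necessarily the actual program:

```c
/* Minimal UART echo/poke test - a sketch, not the actual diagnostic.
 * Assumes COM1 at the conventional 0x3F8 base and a 16550-family UART,
 * built with a Borland-style 16-bit DOS compiler (inportb/outportb). */
#include <stdio.h>
#include <dos.h>

#define COM1     0x3F8
#define RBR      (COM1 + 0)  /* receive buffer register          */
#define THR      (COM1 + 0)  /* transmit holding register        */
#define LSR      (COM1 + 5)  /* line status register             */
#define LSR_DR   0x01        /* data ready                       */
#define LSR_THRE 0x20        /* transmit holding register empty  */

int main(void)
{
    /* Echo each received byte back, logging every register read so
       the behavior can be compared against the emulation's debug log. */
    for(;;) {
        unsigned char lsr = inportb(LSR);
        if(lsr & LSR_DR) {
            unsigned char ch = inportb(RBR);
            printf("RX %02X (LSR=%02X)\n", ch, lsr);
            while(!(inportb(LSR) & LSR_THRE))
                ;  /* spin until the transmitter is ready */
            outportb(THR, ch);
        }
    }
    return 0;
}
```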
> I'd be happy to try to address whatever issues with the UART emulation aren't working for you, but please update to the latest, get new/updated debug log output, and share it with me.
I had downloaded the latest SBBSEXEC.DLL the morning after you made the initialization change and have tried it out. It's working 100%, as is the version downloaded today! Pushing the UART hard also no longer creates any errors or even any unusual debug log entries. Thanks again for fixing this.
There are a couple of other issues I would like to mention.
1. When SVDM uses an inherited socket (the -h option), no telnet negotiations are done. As a result, the connection is assumed to be in ASCII mode and server-side CR characters are translated to CR/LF. Since most programs are already transmitting CR/LF, this gets translated to CR/LF/LF, with the doubled line feeds you'd expect. When using an external socket in telnet mode, could SVDM set the telnet.local_option and telnet.remote_option variables as follows:
A. Assume both remote and local have already suppressed GA, and set the two options accordingly.
B. Set the remote telnet echo option to off, and set the local telnet echo to follow the ServerEcho option from the .INI file.
C. Set both remote and local BINARY_TX options to follow the ServerBinary option from the .INI file.
I don't think it's unreasonable to assume these have already been set up when the telnet connection was initially made. If someone really wants to change the behavior, they could still do so via the .INI file options mentioned. The GA and echo options probably make no difference now, but leaving them unset might cause trouble somewhere down the line. (A sketch of these proposed defaults follows after this list.)
2. Can anything be done to reduce the CPU usage?
3. The VDMODEM isn't importing target_ia32.props and thus is using SSE2 instructions.
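To make item 1 concrete, here's a minimal sketch of the proposed defaults. The struct and function names are stand-ins (I don't know SVDM's internals); the option numbers are the standard Telnet ones from RFC 856/857/858:

```c
/* Sketch of the proposed option defaults for an inherited socket (-h).
 * telnet_opts and assume_negotiated() are hypothetical stand-ins. */
#include <stdbool.h>

#define TELNET_BINARY_TX 0   /* TRANSMIT-BINARY   */
#define TELNET_ECHO      1   /* ECHO              */
#define TELNET_SUP_GA    3   /* SUPPRESS-GO-AHEAD */

struct telnet_opts {
    bool local_option[256];
    bool remote_option[256];
};

void assume_negotiated(struct telnet_opts* t, bool server_echo, bool server_binary)
{
    /* A: assume GA was already suppressed in both directions */
    t->local_option[TELNET_SUP_GA]     = true;
    t->remote_option[TELNET_SUP_GA]    = true;

    /* B: remote echo off; local echo follows the ServerEcho .ini option */
    t->remote_option[TELNET_ECHO]      = false;
    t->local_option[TELNET_ECHO]       = server_echo;

    /* C: both TX-binary options follow the ServerBinary .ini option */
    t->local_option[TELNET_BINARY_TX]  = server_binary;
    t->remote_option[TELNET_BINARY_TX] = server_binary;
}
```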
Thanks yet again for all the work you've done on this and for fixing the issue I was having.
> 1. When SVDM uses an inherited socket (the -h option) no telnet negotiations are done.

I'll be committing a change here to address that - basically sending the Telnet commands to re-negotiate those operating parameters (the same sequence that happens when answering an incoming Telnet connection).
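For anyone following along, the answering-side sequence amounts to something like this sketch. It's plain BSD-style sockets; the helper name and the exact option list are illustrative, not the actual commit (a real program would also need WSAStartup() and error handling):

```c
/* Sketch: re-send the IAC commands an answering Telnet server would. */
#include <winsock2.h>   /* link with ws2_32; use <sys/socket.h> on *nix */

#define TELNET_IAC       255
#define TELNET_WILL      251
#define TELNET_DO        253
#define TELNET_BINARY_TX   0
#define TELNET_ECHO        1
#define TELNET_SUP_GA      3

static void send_telnet_cmd(SOCKET s, unsigned char cmd, unsigned char opt)
{
    unsigned char buf[3] = { TELNET_IAC, cmd, opt };
    send(s, (const char*)buf, (int)sizeof buf, 0);
}

void renegotiate(SOCKET s, int server_echo, int server_binary)
{
    send_telnet_cmd(s, TELNET_WILL, TELNET_SUP_GA);
    send_telnet_cmd(s, TELNET_DO,   TELNET_SUP_GA);
    if(server_echo)
        send_telnet_cmd(s, TELNET_WILL, TELNET_ECHO);
    if(server_binary)
        send_telnet_cmd(s, TELNET_WILL, TELNET_BINARY_TX);
}
```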
I added 2 new .ini settings for you to play with:
- MainLoopDelay (default: 0, set to 1+ to add CPU yield)
- SocketSelectTimeout (default: 0, set to 1+ to add CPU yield)
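Roughly speaking, the two settings hook into the main loop like this sketch (not the actual SVDM loop; the function and parameter names are mine). A zero value keeps the old busy-poll behavior; any non-zero value lets the thread yield the CPU:

```c
/* Sketch: how a 1+ ms setting turns a busy-wait into a CPU yield. */
#include <winsock2.h>   /* pulls in windows.h for Sleep() */

void main_loop_iteration(SOCKET s, DWORD main_loop_delay, DWORD select_timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(s, &rfds);
    tv.tv_sec  = 0;
    tv.tv_usec = select_timeout_ms * 1000;  /* 0 = poll and return at once */

    /* with a non-zero timeout, select() blocks (yielding the CPU)
       until data arrives or the timeout expires */
    select(0, &rfds, NULL, NULL, &tv);

    /* ... service the UART/socket here ... */

    if(main_loop_delay)
        Sleep(main_loop_delay);  /* explicit yield between iterations */
}
```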
Re: SVDM - Which SBBSEXEC.DLL and DOSXTRN.EXE version?
By: Digital Man to Fzf on Mon Mar 25 2024 04:27 pm
> > 1. When SVDM uses an inherited socket (the -h option) no telnet negotiations are done.
>
> I'll be committing a change here to address that - basically sending the Telnet commands to re-negotiate those operating parameters (the same sequence that happens when answering an incoming Telnet connection).
It addresses the local configuration, but unfortunately it still doesn't set the remote options. The remote is usually going to be in binary mode, but SVDM has the remote option set to ASCII by default. Since the Telnet NVT rules require a CR to be followed by LF or NUL, a CR from the remote then gets held up until a second byte is sent.
Sending a DO TX_BINARY alongside the WILL TX_BINARY when in ServerBinary mode, and sending a DONT TX_BINARY when not in ServerBinary but using an external socket, sets the remote options to appropriately match what SVDM is expecting. Clients might not like having their TX binary mode turned off mid-session, but if someone is disabling binary mode on the server side, they are already doing something weird.
This also sets the remote to binary when SVDM answers in listen mode; at the moment, SVDM leaves the remote TX in ASCII at all times.
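In terms of the send_telnet_cmd() stand-in from the earlier sketch (still my names, not SVDM's), the suggestion is just:

```c
/* Sketch: negotiate the remote side's TX-binary mode to match SVDM. */
#include <winsock2.h>

#define TELNET_DO        253
#define TELNET_DONT      254
#define TELNET_BINARY_TX   0

extern void send_telnet_cmd(SOCKET s, unsigned char cmd, unsigned char opt);

void negotiate_remote_binary(SOCKET s, int server_binary)
{
    if(server_binary)
        send_telnet_cmd(s, TELNET_DO,   TELNET_BINARY_TX);  /* near the WILL */
    else
        send_telnet_cmd(s, TELNET_DONT, TELNET_BINARY_TX);  /* external socket, ASCII mode */
}
```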
> I added 2 new .ini settings for you to play with:
> - MainLoopDelay (default: 0, set to 1+ to add CPU yield)
> - SocketSelectTimeout (default: 0, set to 1+ to add CPU yield)
These work perfectly, thanks! Just a simple 1 ms delay in the main loop drops CPU usage to 0% most of the time.
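For anyone else tuning this, the entire change in my .ini amounted to the two lines below (where exactly they go depends on your svdm.ini layout; adjust to wherever your other SVDM settings live):

```ini
MainLoopDelay = 1        ; 1 ms yield per main-loop pass
SocketSelectTimeout = 1  ; 1 ms select() timeout instead of pure polling
```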
I also looked into error 122 in the SBBSEXEC input_thread when SVDM gets pushed hard, such as during a file transfer. A little additional information on the next waiting mailslot message makes it pretty clear. Sorry, these are going to wrap oddly:
SBBS: !input_thread: ReadFile Error 122 (space=9411, count=0, nextsize=10000, waiting=46)
SBBS: !input_thread: ReadFile Error 122 (space=1211, count=0, nextsize=5056, waiting=45)
SBBS: !input_thread: ReadFile Error 122 (space=9635, count=0, nextsize=10000, waiting=26)
Etc. There's just not enough space in the ring buffer at the time.
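For reference, here's a sketch of the check that produces those numbers. GetMailslotInfo() reports the next queued message's size and the queue depth, and error 122 (ERROR_INSUFFICIENT_BUFFER) just means the next message won't fit in the buffer offered to ReadFile(). The variable names are mine, not the actual input_thread code:

```c
/* Sketch: why a short mailslot ReadFile() fails with error 122. */
#include <windows.h>
#include <stdio.h>

void service_mailslot(HANDLE mailslot, char* buf, DWORD space /* free ring-buffer bytes */)
{
    DWORD next_size = 0, waiting = 0, count = 0;

    /* peek at the next message's size and how many are queued */
    GetMailslotInfo(mailslot, NULL, &next_size, &waiting, NULL);

    if(!ReadFile(mailslot, buf, space, &count, NULL)) {
        DWORD err = GetLastError();
        if(err == ERROR_INSUFFICIENT_BUFFER)  /* 122: message > space; retry later */
            printf("ReadFile Error %lu (space=%lu, count=%lu, nextsize=%lu, waiting=%lu)\n",
                   err, space, count, next_size, waiting);
        return;
    }
    /* ... copy count bytes into the ring buffer ... */
}
```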
While these messages are harmless, the sheer number of them can thrash a CPU pretty well right at a time when the CPU is already busy. I changed the logging to record error 122 at a lower priority so it can be squelched unless debugging is needed. That further drops the CPU usage when SVDM is processing a lot of data.
Does your GitLab accept anonymous updates, or can I send you a diff?
Thanks again for all your work on this!