• SVDM - Which SBBSEXEC.DLL and DOSXTRN.EXE version?

    From Fzf@VERT/FQBBS to Digital Man on Mon Mar 25 14:18:58 2024
    Re: SVDM - Which SBBSEXEC.DLL and DOSXTRN.EXE version?
    By: Digital Man to Fzf on Mon Mar 11 2024 07:32 pm

    The latest and greatest sbbsexec.dll and dosxtrn.exe can be found in the nightly builds of Synchronet for Windows:

    Thank you for looking into this. I apologize for taking all this time to get back to you. Real life sometimes intrudes.

    How did you determine the read/writes were "nonsense"?

    As I looked into what was going on I moved to lower and lower level diagnostic programs until I finally just wrote my own to know exactly what was being done on the program side.

    Would be happy to try to address whatever issues with the UART emulation aren't working for you, but please update to the latest and get new/updated debug log output and share with me.

    I had downloaded the latest SBBSEXEC.DLL the morning after you made the initialization change and have tried it out. It's working 100% along with the version downloaded today! Pushing the UART hard also no longer creates any errors or even any unusual debug log entries. Thanks again for fixing this.

    There are a couple of other issues I would like to mention.

    1. When SVDM uses an inherited socket (the -h option) no telnet negotiations are done. As a result, the connection is assumed to be in ASCII mode and server-side CR characters are translated to CR/LF. Since most programs are already transmitting a CR/LF, this gets translated to CR/LF/LF, with the expected results. When using an external socket in telnet mode, could SVDM set the telnet.local_option and telnet.remote_option variables as follows:

    A. Assume both remote and local have already suppressed GA and set the two
    options accordingly

    B. Set the remote telnet echo option to off and set the local telnet echo
    to follow the ServerEcho option from the .INI file

    C. Set both remote and local BINARY_TX options to follow the ServerBinary option from the .INI file

    I don't think it's unreasonable to assume these have already been set up when the telnet connection was initially made. If someone really wants to change the behavior they could still do so by using the .INI file options mentioned. The GA and echo options probably make no difference now but leaving them unset might cause trouble somewhere down the line.
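
    For illustration, the A/B/C sequence above could be sketched as raw Telnet bytes. The option constants come from RFC 854/856/857/858; the helper name and structure are hypothetical, not SVDM's actual code:

```c
#include <stddef.h>
#include <string.h>

/* Telnet protocol constants (RFC 854/856/857/858) */
#define TELNET_IAC        255
#define TELNET_WILL       251
#define TELNET_WONT       252
#define TELNET_DO         253
#define TELNET_DONT       254
#define TELNET_OPT_BINARY 0   /* RFC 856: TRANSMIT-BINARY */
#define TELNET_OPT_ECHO   1   /* RFC 857 */
#define TELNET_OPT_SGA    3   /* RFC 858: SUPPRESS-GO-AHEAD */

/* Build the negotiation suggested for an inherited socket:
 * A. both sides suppress go-ahead,
 * B. remote echo off, local echo per ServerEcho,
 * C. binary both directions per ServerBinary.
 * Returns the number of bytes written to buf (caller provides >= 18). */
size_t build_negotiation(unsigned char *buf, int server_echo, int server_binary)
{
    size_t n = 0;
    /* A. both sides suppress go-ahead */
    const unsigned char sga[] = { TELNET_IAC, TELNET_WILL, TELNET_OPT_SGA,
                                  TELNET_IAC, TELNET_DO,   TELNET_OPT_SGA };
    memcpy(buf + n, sga, sizeof sga); n += sizeof sga;
    /* B. remote echo off; local echo follows ServerEcho */
    buf[n++] = TELNET_IAC; buf[n++] = TELNET_DONT; buf[n++] = TELNET_OPT_ECHO;
    buf[n++] = TELNET_IAC;
    buf[n++] = server_echo ? TELNET_WILL : TELNET_WONT;
    buf[n++] = TELNET_OPT_ECHO;
    /* C. binary transmission in both directions, per ServerBinary */
    buf[n++] = TELNET_IAC;
    buf[n++] = server_binary ? TELNET_WILL : TELNET_WONT;
    buf[n++] = TELNET_OPT_BINARY;
    buf[n++] = TELNET_IAC;
    buf[n++] = server_binary ? TELNET_DO : TELNET_DONT;
    buf[n++] = TELNET_OPT_BINARY;
    return n;
}
```

    In the common case (ServerEcho and ServerBinary both enabled) this amounts to the same sequence Synchronet's terminal server sends when answering an incoming Telnet call.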

    2. Can anything be done to reduce the CPU usage?

    3. The VDMODEM isn't importing target_ia32.props and thus is using SSE2 instructions. SSE2 has been around for a little while now and it's fair to assume most everyone has it available. However, BBS users tend to be more likely than the average person to be using what I'm going to lovingly call 'period correct hardware'. Many of the CPUs from that era don't have such advanced instruction sets. Could the instruction set extensions be changed by using the target_ia32.props file? I do realize this may be in slight conflict with #2 above.

    Thanks yet again for all the work you've done on this and for fixing the issue I was having.

    ---
    þ Synchronet þ The Fool's Quarter - fqbbs.synchro.net
  • From Digital Man@VERT to Fzf on Mon Mar 25 16:27:27 2024
    Re: SVDM - Which SBBSEXEC.DLL and DOSXTRN.EXE version?
    By: Fzf to Digital Man on Mon Mar 25 2024 02:18 pm

    As I looked into what was going on I moved to lower and lower level diagnostic programs until I finally just wrote my own to know exactly what was being done on the program side.

    Would be happy to try to address whatever issues with the UART emulation aren't working for you, but please update to the latest and get new/updated debug log output and share with me.

    I had downloaded the latest SBBSEXEC.DLL the morning after you made the initialization change and have tried it out. It's working 100% along with the version downloaded today! Pushing the UART hard also no longer creates any errors or even any unusual debug log entries. Thanks again for fixing this.

    The only problem I fixed after reading your message was the issue with the UART not defaulting to COM1 parameters in all cases. Any other fixes you observed were likely already made in the git repo (and would've been included in the nightly builds of sbbs-win32).

    There are a couple of other issues I would like to mention.

    1. When SVDM uses an inherited socket (the -h option) no telnet negotiations are done. As a result, the connection is assumed to be in ASCII mode and server-side CR characters are translated to CR/LF. Since most programs are already transmitting a CR/LF, this gets translated to CR/LF/LF, with the expected results. When using an external socket in telnet mode, could SVDM set the telnet.local_option and telnet.remote_option variables as follows:

    A. Assume both remote and local have already suppressed GA and set the two
    options accordingly

    B. Set the remote telnet echo option to off and set the local telnet echo
    to follow the ServerEcho option from the .INI file

    C. Set both remote and local BINARY_TX options to follow the ServerBinary option from the .INI file

    I don't think it's unreasonable to assume these have already been set up when the telnet connection was initially made. If someone really wants to change the behavior they could still do so by using the .INI file options mentioned. The GA and echo options probably make no difference now but leaving them unset might cause trouble somewhere down the line.

    I'll be committing a change here to address that - basically send the Telnet commands to re-negotiate those operating parameters (the same sequence that happens when answering an incoming Telnet connection).

    2. Can anything be done to reduce the CPU usage?

    I added 2 new .ini settings for you to play with:
    - MainLoopDelay (default: 0, set to 1+ to add CPU yield)
    - SocketSelectTimeout (default: 0, set to 1+ to add CPU yield)

    Perhaps one or both of these should default to a non-zero value, but I'll let you experiment with them and let me know what your results are.
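
    As a sketch of how these two settings trade polling for blocking (the MainLoopDelay and SocketSelectTimeout names are the real .ini keys from the message above, but the loop structure and helper names here are hypothetical, not SVDM's actual source):

```c
#include <sys/select.h>
#include <time.h>

/* Illustrative config mirroring the two new .ini settings. */
struct vdm_cfg {
    unsigned main_loop_delay;       /* MainLoopDelay, milliseconds */
    unsigned socket_select_timeout; /* SocketSelectTimeout, milliseconds */
};

/* A zero timeout makes select() a pure poll (spins the CPU);
 * a non-zero one lets select() block, yielding the CPU. */
struct timeval select_timeout(const struct vdm_cfg *cfg)
{
    struct timeval tv;
    tv.tv_sec  = cfg->socket_select_timeout / 1000;
    tv.tv_usec = (cfg->socket_select_timeout % 1000) * 1000;
    return tv;
}

/* Called once per main-loop iteration: sleep instead of spinning
 * when MainLoopDelay is non-zero. */
void main_loop_yield(const struct vdm_cfg *cfg)
{
    if (cfg->main_loop_delay > 0) {
        struct timespec ts = {
            .tv_sec  = cfg->main_loop_delay / 1000,
            .tv_nsec = (long)(cfg->main_loop_delay % 1000) * 1000000L,
        };
        nanosleep(&ts, NULL); /* yield the CPU for the configured delay */
    }
}
```

    Even a 1 ms yield per iteration is enough to keep an otherwise-idle poll loop near 0% CPU, at the cost of up to 1 ms of added latency per pass.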

    3. The VDMODEM isn't importing target_ia32.props and thus is using SSE2 instructions.

    That was unintentional - fixed.

    Thanks yet again for all the work you've done on this and for fixing the issue I was having.

    Thank you! Your feedback is valuable,
    --
    digital man (rob)

    Synchronet "Real Fact" #114:
    Weedpuller "Geographic" http://youtu.be/cpzBDVgmWSA
    Norco, CA WX: 61.8°F, 51.0% humidity, 4 mph WSW wind, 0.00 inches rain/24hrs
    ---
    þ Synchronet þ Vertrauen þ Home of Synchronet þ [vert/cvs/bbs].synchro.net
  • From Fzf@VERT/FQBBS to Digital Man on Wed Apr 10 20:52:26 2024
    Re: SVDM - Which SBBSEXEC.DLL and DOSXTRN.EXE version?
    By: Digital Man to Fzf on Mon Mar 25 2024 04:27 pm

    1. When SVDM uses an inherited socket (the -h option) no telnet
    negotiations are done.
    I'll be committing a change here to address that - basically send the Telnet commands to re-negotiate those operating parameters (the same sequence that happens when answering an incoming Telnet connection).

    It addresses the local configuration but unfortunately it still doesn't set remote options. The remote is usually going to be in binary mode but SVDM has the remote option set to ASCII by default. A CR from the remote then gets held up until a second byte is sent.

    Sending a DO TX_BINARY near the WILL TX_BINARY when in ServerBinary mode and sending a DONT TX_BINARY when not in ServerBinary but using an external socket sets the remote options to appropriately match what SVDM is expecting. Clients might not like having their TX binary mode turned off mid session, but if someone is disabling binary mode on the server side they are already doing something weird.

    It also sets the remote to binary when SVDM answers in listen mode. At the moment it leaves the remote TX in ASCII at all times.
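
    The CR hold-up described above follows from RFC 854: in NVT (ASCII) mode a bare CR must be followed by LF or NUL, so a receiver cannot deliver a CR until it sees the next byte, while in binary mode (RFC 856) every byte passes through immediately. A minimal receive-side sketch (hypothetical helper, not SVDM's actual code):

```c
/* Per-direction Telnet receive state: binary vs. NVT ASCII mode. */
struct rx_state {
    int binary;     /* remote negotiated TRANSMIT-BINARY */
    int pending_cr; /* CR seen, waiting for the byte after it */
};

/* Feed one received byte; writes 0..2 delivered bytes into out[]
 * and returns how many were delivered. */
int telnet_rx(struct rx_state *st, unsigned char c, unsigned char *out)
{
    int n = 0;
    if (st->binary) {           /* binary mode: immediate pass-through */
        out[n++] = c;
        return n;
    }
    if (st->pending_cr) {       /* resolve a previously held CR */
        st->pending_cr = 0;
        out[n++] = '\r';
        if (c == '\n')
            out[n++] = '\n';    /* CR LF -> CRLF */
        else if (c != 0)
            out[n++] = c;       /* CR NUL -> bare CR; else CR + data */
        return n;
    }
    if (c == '\r') {            /* the hold-up: CR can't be delivered yet */
        st->pending_cr = 1;
        return 0;
    }
    out[n++] = c;
    return n;
}
```

    With the remote option left in ASCII mode, every lone CR from the client stalls in that `pending_cr` state until the next keystroke arrives, which is exactly the symptom described above.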

    I added 2 new .ini settings for you to play with:
    - MainLoopDelay (default: 0, set to 1+ to add CPU yield)
    - SocketSelectTimeout (default: 0, set to 1+ to add CPU yield)

    These work perfectly, thanks! Just a simple 1 ms delay in the main loop drops CPU usage to 0% most of the time.

    I also looked into the error 122 in the SBBSEXEC input_thread when SVDM gets pushed hard, such as during a file transfer. A little additional information on the next waiting mailslot message makes it pretty clear. Sorry, these are going to wrap oddly:

    SBBS: !input_thread: ReadFile Error 122 (space=9411, count=0, nextsize=10000, waiting=46)
    SBBS: !input_thread: ReadFile Error 122 (space=1211, count=0, nextsize=5056, waiting=45)
    SBBS: !input_thread: ReadFile Error 122 (space=9635, count=0, nextsize=10000, waiting=26)

    Etc. There's just not enough space in the ring buffer at the time. While these messages are harmless, the sheer number of them can help thrash a CPU pretty good right at a time when the CPU is busy. I changed the logging to log error 122 at a lower priority so it can be squelched out unless debugging is needed. That further drops the CPU usage when the SVDM is processing a lot of data.
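
    The log lines above make the mechanism visible: error 122 is Windows' ERROR_INSUFFICIENT_BUFFER, returned when the next mailslot message (nextsize) exceeds the free ring-buffer space. A sketch of the described logging change (helper and level names are hypothetical, not the actual sbbsexec code):

```c
#define ERROR_INSUFFICIENT_BUFFER 122 /* winerror.h value for error 122 */

enum log_level { LOG_ERR, LOG_DEBUG };

/* Decide whether the next mailslot message fits in the ring buffer,
 * and at what severity to log when it does not.  Mirrors the fix
 * described above: the condition is expected under load and harmless,
 * so it is demoted from error to debug severity. */
int classify_mailslot_read(unsigned ringbuf_space, unsigned next_msg_size,
                           enum log_level *level)
{
    if (next_msg_size > ringbuf_space) {
        /* ReadFile() would fail with error 122; squelchable noise */
        *level = LOG_DEBUG;
        return ERROR_INSUFFICIENT_BUFFER;
    }
    *level = LOG_ERR; /* any other failure is a real error */
    return 0;
}
```

    Applied to the first log line above (space=9411, nextsize=10000), the message doesn't fit, so the read fails with 122 and gets logged at debug level instead of flooding the error log mid-transfer.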

    Does your gitlab accept anonymous updates, or can I send you a diff?

    Thanks again for all your work on this!

    ---
    þ Synchronet þ The Fool's Quarter - fqbbs.synchro.net
  • From Digital Man@VERT to Fzf on Wed Apr 10 23:24:33 2024
    Re: SVDM - Which SBBSEXEC.DLL and DOSXTRN.EXE version?
    By: Fzf to Digital Man on Wed Apr 10 2024 08:52 pm

    Re: SVDM - Which SBBSEXEC.DLL and DOSXTRN.EXE version?
    By: Digital Man to Fzf on Mon Mar 25 2024 04:27 pm

    1. When SVDM uses an inherited socket (the -h option) no telnet
    negotiations are done.
    I'll be committing a change here to address that - basically send the Telnet commands to re-negotiate those operating parameters (the same sequence that happens when answering an incoming Telnet connection).

    It addresses the local configuration but unfortunately it still doesn't set remote options. The remote is usually going to be in binary mode but SVDM has the remote option set to ASCII by default. A CR from the remote then gets held up until a second byte is sent.

    Sending a DO TX_BINARY near the WILL TX_BINARY when in ServerBinary mode and sending a DONT TX_BINARY when not in ServerBinary but using an external socket sets the remote options to appropriately match what SVDM is expecting. Clients might not like having their TX binary mode turned off mid session, but if someone is disabling binary mode on the server side they are already doing something weird.

    It also sets the remote to binary when SVDM answers in listen mode. At the moment it leaves the remote TX in ASCII at all times.

    That's a good point. I forgot that we need to negotiate send and receive separately. I just committed a change that does the BINARY_TX negotiation for both sides, always (disabling it when the server_binary option is disabled). Hopefully this doesn't disrupt anyone else's existing use of SVDM (likely not, I don't think it's getting a lot of use yet).

    I added 2 new .ini settings for you to play with:
    - MainLoopDelay (default: 0, set to 1+ to add CPU yield)
    - SocketSelectTimeout (default: 0, set to 1+ to add CPU yield)

    These work perfectly, thanks! Just a simple 1 ms delay in the main loop drops CPU usage to 0% most of the time.

    Good to know. I'll just make the default MainLoopDelay 1 then.

    I also looked into the error 122 in the SBBSEXEC input_thread when SVDM gets pushed hard, such as during a file transfer. A little additional information on the next waiting mailslot message makes it pretty clear. Sorry, these are going to wrap oddly:

    SBBS: !input_thread: ReadFile Error 122 (space=9411, count=0, nextsize=10000, waiting=46)
    SBBS: !input_thread: ReadFile Error 122 (space=1211, count=0, nextsize=5056, waiting=45)
    SBBS: !input_thread: ReadFile Error 122 (space=9635, count=0, nextsize=10000, waiting=26)

    Etc. There's just not enough space in the ring buffer at the time. While these messages are harmless, the sheer number of them can help thrash a CPU pretty good right at a time when the CPU is busy. I changed the logging to log error 122 at a lower priority so it can be squelched out unless debugging is needed. That further drops the CPU usage when the SVDM is processing a lot of data.

    Are you running DebugView or something similar that's capturing these log messages? That would explain the additional CPU usage.

    Does your gitlab accept anonymous updates, or can I send you a diff?

    No, you need an account on gitlab.synchro.net to submit a merge request. You could send me a diff, but a merge request is preferred. That said, a patch/MR that just lowers the severity of that log message probably wouldn't be accepted without more justification (i.e., when running without DebugView or equivalent, those log messages should have no CPU impact).

    Thanks again for all your work on this!

    Thank you for the test reports and suggestions!
    --
    digital man (rob)

    This Is Spinal Tap quote #18:
    Sustain, listen to it. Don't hear anything. You would though were it playing.
    Norco, CA WX: 63.5°F, 49.0% humidity, 0 mph ENE wind, 0.00 inches rain/24hrs
    ---
    þ Synchronet þ Vertrauen þ Home of Synchronet þ [vert/cvs/bbs].synchro.net