Using your code, I would have to add another timer (as you know, I already asked whether using many timers causes problems...).
Also, one of the useful characteristics of WebSocket is that the client does not need to constantly poll the connection, but with that workaround this feature is (almost) lost (almost, because the polling code runs on the server).
For the moment I'm glad that my LAN is not the problem (in this case).
Many messages are already being sent between the client and the server; it is unlikely that this will have any noticeable effect on server performance. You can also change the timer to run every 10 seconds or more. It shouldn't really matter.
I recommend that you focus on getting the solution to work. It will not be slow because of a "ping" request.
The workaround Erel suggested, if I understand it correctly, is not enough on its own.
The timer (tmr in that example) should "run" a client routine, but:
1) that routine must exist on the client (with a name like "SrvPing")
2) the client should respond by calling a server routine (e.g. "CltPong")
3) another timer is needed to wait for the client's "pong"; when that timer expires, we can consider the client disconnected.
All this only because the Disconnected event does not fire (more exactly, the server-side WebSocket does not report itself as disconnected).
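If it helps to make the three steps concrete, here is a minimal sketch of that ping/pong timeout logic in Python (not B4X). The routine names SrvPing/CltPong and the 10-second timeout are just the placeholders from this thread, and the code only simulates the two timers with explicit timestamps instead of using a real WebSocket:

```python
class PingPongMonitor:
    """Tracks a ping/pong heartbeat: the server records when it sent a
    ping ("SrvPing") and whether the client answered ("CltPong") within
    a timeout; if not, the client is considered disconnected."""

    def __init__(self, timeout_seconds=10):
        self.timeout = timeout_seconds
        self.ping_sent_at = None
        self.pong_received = False

    def send_ping(self, now):
        # Step 1: the server timer fires and calls the client's SrvPing routine.
        self.ping_sent_at = now
        self.pong_received = False

    def on_client_pong(self):
        # Step 2: the client answers by calling the server's CltPong routine.
        self.pong_received = True

    def is_client_alive(self, now):
        # Step 3: the second timer checks whether the pong arrived in time.
        if self.ping_sent_at is None:
            return True  # no ping outstanding yet
        if self.pong_received:
            return True
        return (now - self.ping_sent_at) < self.timeout


# Simulated run: the client answers the first ping but not the second.
mon = PingPongMonitor(timeout_seconds=10)
mon.send_ping(now=0)
mon.on_client_pong()
print(mon.is_client_alive(now=11))   # True: pong arrived

mon.send_ping(now=20)                # second ping, no answer
print(mon.is_client_alive(now=25))   # True: timeout not yet expired
print(mon.is_client_alive(now=31))   # False: client considered disconnected
```

In a real B4J server the two timestamps would come from the two timers, and "considered disconnected" is where you would close the server-side WebSocket yourself, since the Disconnected event is not firing on its own.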