Terminal Server Stress Test Tool

While QTest is already one of the best performance testing tools on the market, QTest WR lets performance testers and quality assurance professionals record end-user actions on desktop thick-client, RDP and Citrix based applications and then replay them en masse. Stresslinux is aimed at users (system builders, overclockers) who want to put their hardware under sustained high load and monitor stability and thermal behaviour.

There are many all-in-one benchmarking tools with pretty GUIs available for Linux, but here we focus on simple command-line tools and test the following:

  • CPU stress testing and benchmarking.
  • Hard drive I/O performance testing.
  • Network performance and speed testing.
  • OpenSSL performance testing.

For the network test, the command generates a 3 GB (1000^3 bytes) dummy file full of zeros on the remote server and writes it to stdout, which is transferred via SSH to stdout of the local machine and then piped locally to /dev/null, so only the link itself is measured. You can even watch the progress of the test while it runs.

You can also send some test data to your server and observe the data/load generated in your MS GUI or the TEPS console. Note: while you can use this tool to generate data against your server, it is a Microsoft tool and will not work on all versions of Windows. Use it at your own risk, as it is not supported by IBM; it is best used on a test system.
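
Below is a minimal sketch of how these checks can be run from the shell. The host name, file sizes and paths are placeholders, and the pv, stress and openssl commands are assumed to be installed:

    # Network throughput: generate ~3 GB of zeros on the remote host and pipe
    # it over SSH to /dev/null locally; pv shows live progress and rate.
    ssh user@remote.example.com 'dd if=/dev/zero bs=1M count=3000' | pv > /dev/null

    # CPU stress: keep 4 workers spinning for 60 seconds.
    stress --cpu 4 --timeout 60

    # Disk write throughput: write 1 GB while bypassing the page cache.
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct && rm /tmp/ddtest

    # OpenSSL crypto throughput.
    openssl speed aes-256-cbc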

  • Hi everyone.

    We are experiencing some trouble with the NX Terminal Server (NoMachine Enterprise Terminal Server Subscription – Version 6.3.6) when more than 1,500 users log in “at the same time” (meaning within a span of about 30 minutes).

    The problem

    Our users connect from different parts of the country between 9:30 and 10:00. When the number of connections reaches about 1,500, the Terminal Server slows down and further users cannot log in (we have 2,300 users in total). Everything fails and the users already logged in get slow connections, so we have to restart everything and fall back to our contingency plan (which is FreeNX).

    NX support’s first suggestion was a probable disk I/O problem, but that was quickly discarded because we moved the data to a ramdisk and the problem continued.

    No logs are available: since this is a production environment, the administrators went for a full restart as soon as possible, because delays in operations would end in heads rolling.

    I’m trying to debug this by stressing the Terminal Server.

    By using guest sessions

    To replicate this production problem I looked for a way to script multiple sessions and see what happens to the Terminal Server; the first article I read was this one:

    After reading it I enabled guest sessions, generated 100 templates for each guest user and ran a script to launch the NX sessions. Sadly, I found that guest sessions are capped at a certain number and stopped connecting.
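
    A launcher script along the following lines can be used for this (a sketch only, not the exact script used here; the template directory is a placeholder, and it assumes the nxplayer client accepts a saved .nxs session file via --session, which should be checked against the client documentation for your version):

    #!/bin/bash
    # Launch one NX client per pre-generated session template, slightly staggered.
    TEMPLATE_DIR="$HOME/nx-templates"             # hypothetical path to the .nxs templates
    for nxs in "$TEMPLATE_DIR"/*.nxs; do
        /usr/NX/bin/nxplayer --session "$nxs" &   # assumption: --session opens a saved session file
        sleep 2                                   # stagger the logins a little
    done
    wait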

    By creating system users

    Then I enabled EnableUserDB and EnablePasswordDB in server.cfg to create local users, but with this setup I am unable to authenticate and get the following error:

    [server.cfg]

    EnableUserDB 1

    EnablePasswordDB 1

    Error: Cannot authenticate to the requested node
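
    If the accounts also need to be added to the NX user and password databases once these options are enabled, the commands would be along these lines (a sketch; the account names are hypothetical and I have not verified this against version 6.3.6):

    # Add a test account to the NX user database and set its password
    # (nxserver --passwd prompts interactively for the password).
    sudo /usr/NX/bin/nxserver --useradd stressuser1
    sudo /usr/NX/bin/nxserver --passwd stressuser1

    # Repeat for a batch of accounts, e.g.:
    for i in $(seq 1 100); do
        sudo /usr/NX/bin/nxserver --useradd "stressuser$i"
    done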

    Where I need help

    I would like to know if there’s a way to extend the limits of guest sessions to 1,500 or more.

    Or how I could get the system users to authenticate correctly.

    Best regards.

    I would be interested in hearing more about this issue. Are these virtual Linux sessions? How many nodes are you running? Are you using the Terminal Server as a node as well, or just as a broker for the nodes? Your Terminal Server, being the broker, receives all network traffic from the nodes and then routes it out through one interface. I have 310 users logged in right now, and it’s running at around 100 Mb/s with a peak of 240 Mb/s. Scaling that up by a factor of 4, you might be hitting a bottleneck here if you have a 1 Gb/s connector or infrastructure.
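
    As a back-of-envelope check of that scaling (using the rough numbers quoted above):

    # peak bandwidth per user, projected to 1,500 users
    echo "240 * 1500 / 310" | bc    # 1161 Mb/s projected peak, i.e. above a 1 Gb/s link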

    Use iftop to watch the traffic from and to the nodes, and then the total. If your Terminal Server is also a node, consider turning that off and making it just the broker. There might also be some ways to use the heartbeat feature to increase capacity.
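
    For example, something along these lines will show per-host rates plus totals (a sketch; the interface name is a placeholder, and the port filter assumes the NX daemon is listening on its default port 4000):

    # Watch NX traffic on the broker: per-host rates, with totals at the bottom.
    sudo iftop -n -P -i eth0 -f 'port 4000'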

    Also, something to consider: when people log into a server first thing in the morning, they do so because they need to use the computer immediately. So you might see a high number of people opening browsers initially, and then their usage slowly drops off through the day.