Pizza Roadie:
(I wonder if they need faster “gigabit” network equipment? Maybe it’s a Windows XP network issue more than it’s a Foodtec issue?)
This should simply be impossible. I assume you’re running a 100 Mbit network. Your networking equipment isn’t the issue: the amount of data you’re actually pushing through the system is quite low (assuming it’s a client-server app; see the back-of-the-envelope sketch after this list). There are three reasonable possibilities…
- You’re using a hub instead of a switch (doubtful, since hubs actually cost more than switches these days) and getting lots of collisions. A good switch will also let you run at full duplex rather than half duplex (hubs force half duplex because of collisions).
- Your server machine (the one holding the database, not as in “waiter”) is overloaded: the disk is too slow, the RAM is too low, or the CPU is getting hammered.
- The database isn’t optimized and isn’t performing the way it should.
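If you want to see just how little a POS app asks of the wire, here’s a quick back-of-the-envelope sketch in Python. The order size and order rate are made-up illustrative numbers, not measurements from a Foodtec install:

    LINK_MBPS = 100              # Fast Ethernet link speed
    ORDER_BYTES = 10_000         # assume ~10 KB per ticket round trip (made up)
    ORDERS_PER_MIN = 100         # an absurdly busy rush (made up)

    link_bytes_per_sec = LINK_MBPS * 1_000_000 / 8
    app_bytes_per_sec = ORDER_BYTES * ORDERS_PER_MIN / 60

    print(f"App traffic:   {app_bytes_per_sec:>12,.0f} bytes/sec")
    print(f"Link capacity: {link_bytes_per_sec:>12,.0f} bytes/sec")
    print(f"Utilization:   {app_bytes_per_sec / link_bytes_per_sec:.3%}")

Even at 100 orders a minute you’re using roughly a tenth of a percent of a 100 Mbit link, which is why I’d look past the wiring.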
For #1, get the model number of the networking device that connects everything and Google it. A “switching hub” is a switch, not a hub.
For #2, on the database server, click Start, Run, and type perfmon. Right-click in the graph area and select Properties. On the General tab, change “Sample automatically every” to 600 seconds (10 mins); the graph holds 100 samples, so that will let you graph 16 hours and 40 mins. Then add the following counters (they may already be populated, and if so, just change the sampling interval):
- Memory: Pages/sec
- PhysicalDisk: Avg. Disk Queue Length (select all instances)
- Processor: % Processor Time (select all instances)
Minimize it and go about your business. At the end of the day (or periodically), pull it back up (restore or maximize). Hit Ctrl-H to highlight a particular counter’s line, then click down in the bottom-right area (where the counters are listed) and scroll up and down through them. Pages/sec should average 5 or less. Avg. Disk Queue Length should average below 1 per disk; it may spike a little, but not over, say, 10-20. % Processor Time should average well below 50%; it might spike to 100% periodically, but not for any extended period of time.
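Perfmon can also write counter logs to CSV, and if you’d rather have a script eyeball the numbers against those rules of thumb, here’s a rough Python sketch. The perflog.csv filename and the header matching are assumptions about how you saved the log:

    import csv

    # Average each counter in a perfmon log saved as CSV and compare it
    # to the rules of thumb above. Column headers in these logs look
    # like "\\SERVER\Memory\Pages/sec", so substring matching is enough.
    THRESHOLDS = {                 # header substring -> max acceptable average
        "Pages/sec": 5.0,
        "Avg. Disk Queue Length": 1.0,
        "% Processor Time": 50.0,
    }

    with open("perflog.csv", newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]

    for name, limit in THRESHOLDS.items():
        for col, title in enumerate(header):
            if name not in title:
                continue
            values = []
            for row in data:
                try:
                    values.append(float(row[col]))
                except (ValueError, IndexError):
                    pass           # skip blank or garbled samples
            if values:
                avg = sum(values) / len(values)
                flag = "OK  " if avg <= limit else "HIGH"
                print(f"{flag} avg {avg:8.2f} (limit {limit:5.1f}) {title}")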
If you need more granular data, simply start it at the time you anticipate the heavy rush and set the sample interval to every 60 seconds, but know that the graph still only holds 100 samples, so you will only have up to 1 hour and 40 mins of the most recent data. If you start it at 5pm and don’t check it until 10pm, you only get 8:20pm to 10pm visible.
If all of those counters seem reasonable, then I suspect logical or physical issues with the database (either structural/data issues or simply poor optimization).
You can also spot-check things by pulling up Task Manager (right-click the taskbar and select Task Manager). Go to the Performance tab and look at “Physical Memory”. Available should always, always, always be 10,000 K or more (that’s 10 megs), and it could be significantly higher. You will also see CPU usage (over a very short time window) and page file size.
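If you’d rather log that number over time than babysit Task Manager, here’s a minimal Python sketch (Windows-only, and it assumes Python is installed on the server) that reads available physical memory through the Win32 GlobalMemoryStatusEx call, which is roughly the figure Task Manager shows:

    import ctypes
    from ctypes import wintypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", wintypes.DWORD),
            ("dwMemoryLoad", wintypes.DWORD),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

    avail_mb = status.ullAvailPhys / (1024 * 1024)
    print(f"Available physical memory: {avail_mb:,.0f} MB")
    if avail_mb < 10:              # the 10 meg floor mentioned above
        print("WARNING: the server is starved for RAM")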
A simple defrag of the data volume could do wonders as well.
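If memory serves, XP’s built-in defragmenter will do an analysis-only pass first: from a command prompt, run defrag c: -a (swap in your data volume’s drive letter) and it will tell you whether the volume is fragmented enough to be worth a full defrag run after hours.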