I have inherited an Access database that has linked SQL Server tables. I need to analyze the network traffic generated when the database runs, to determine which parts of the system cause the most network traffic and are therefore the slowest.
I am not an Access guru, so I've struggled with what was suggested, which is: keep Task Manager open at the Networking tab, then step through the app and watch for a significant rise in network traffic. But this seems rather unreliable and time-consuming.
Does anyone have any ideas how I can achieve my goal in Access?
If you really need to analyze the network traffic, then you should probably get to know Wireshark well enough to do a capture that is filtered on the traffic between the client and the SQL server.
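For example, a capture filter along these lines keeps the trace down to just that conversation (assuming the server sits at 192.0.2.10 and listens on SQL Server's default port 1433; substitute your own address and port):

    host 192.0.2.10 and tcp port 1433

Once you have a capture, Wireshark's tds display filter will show the individual batches and responses, which makes it much easier to match traffic spikes to specific parts of the app than watching Task Manager.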
I am trying to update a legacy system's SQL solution to use the cloud.
The solution today involves a Windows SQL Server installed on-site at the customer; various machines are then configured to connect to its IP address / port / server name. When they connect, the machines set up any tables that are missing and regularly send their data. Data rates are low for an individual machine: roughly one write request every 10 seconds (it varies a lot), with no more than 2-3k of data per write request.
Moving this to the cloud is tricky, mostly because the machines do not have unique identifiers. The good news is that we have each legacy machine connected to an IoT gateway (just think RPi) that knows a unique machineId. Furthermore, the IOTG is a full-fledged computer, but not a very powerful one, and its disk is an SD card.
[Diagram: New and Old Network Layout]
So far I have had a few things fall on their face.
1) Setting the machine to think the DB's IP/port is that of the IoT gateway: I set up an Express server on the IOTG, listened, and planned to inject the unique ID into the queries before proxying them up to the cloud. I may have had a bug, but for some reason I couldn't even see the requests coming in on the port. Even if I could, I'd still have to figure out how to decode them. Shouldn't I at least be able to see these requests coming in? (See the sketch after this list.)
2) Started looking into SQLite. The idea was to have SQLite listen on the port as an actual DB, then have a process on the IOTG query data out of SQLite, append the unique ID, and send it to the cloud. Unfortunately, SQLite does not listen on a port; it's an embedded library, not a server.
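As a sanity check, I assume a raw TCP listener along these lines would at least show whether bytes are arriving, since Express only answers HTTP and a TDS client's binary handshake never reaches an HTTP route handler. This is just a sketch using Node's built-in net module (in TypeScript; the port and logging are illustrative):

    // listener.ts - dump whatever the machines send, byte for byte.
    import * as net from "net";

    const server = net.createServer((socket) => {
      console.log(`connection from ${socket.remoteAddress}:${socket.remotePort}`);
      socket.on("data", (chunk: Buffer) => {
        // TDS is a binary protocol, so log hex rather than text.
        console.log(`received ${chunk.length} bytes: ${chunk.toString("hex")}`);
      });
      socket.on("end", () => console.log("client disconnected"));
    });

    // 1433 is SQL Server's default port; substitute whatever the machines target.
    server.listen(1433, () => console.log("listening on 1433"));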
I am starting to look at just installing a whole SQL Server on the device, but I'd really like to avoid that. I'm pretty sure it's fairly large, and heavy writing to disk is not advisable for a small embedded system like the one I'm running.
Generally my questions boil down to:
1) Should I be able to see SQL queries in an Express server?
2) Should I be using a different tech? I failed to find a more SQL-specific proxy.
3) Am I correct to think that the SQLite path is dead? Even if I could find a way to attach it to a port, there is still not going to be any sort of response from SQLite when the clients try to make a connection.
4) Am I wrong to fear the local server? Diving into some documentation for making Express work with DBs gets me to here: https://www.microsoft.com/en-us/sql-server/developer-get-started/node/ubuntu/ which suggests 4GB of memory; we're working with 0.5GB.
Any other thoughts on how to approach this would be great.
Just had a bizarre issue with SQL Azure, and it happened during a small phase just before full go-live, with some users doing data entry.
"Database 'dbname' on server 'xxx' is not currently available. Please retry the connection later. If the problem persists, contact customer support."
When I tried to connect via the SQL Azure database management website, I got:
"Firewall check failed.
Resource ID : 1. The request minimum guarantee is 0,
maximum limit is 180 and the current usage for the database is 0.
However, the server is currently too busy to support request greater than 0 for this database."
Looking at the databases section of the Azure Management website, the site reported that it couldn't access the DB, but unfortunately I didn't capture the exact error message.
Bizarrely, a couple of my users were still able to log in to our system website that accesses the DB, and to view and save data. Eventually they lost their connections too, however.
After an hour or so, the databases came back to life and we could fully access them again.
I have looked at the server's master DB event table, using queries from here, and there were a couple of connection failures but nothing interesting. No throttling or deadlocks; just a couple of failed connections with "Client may have timed out when establishing connection. Try increasing the connection timeout." in the description.
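For anyone wanting to reproduce this, a query along these lines against sys.event_log in the logical master database surfaces the same kind of events (sys.event_log is my assumption about the table involved; the 24-hour window is arbitrary):

    SELECT start_time, event_category, event_type, event_subtype_desc, description
    FROM sys.event_log
    WHERE start_time > DATEADD(hour, -24, GETUTCDATE())
    ORDER BY start_time DESC;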
Any ideas where else to look?
Business users have had a massive drop in confidence because of this.
What you're describing normally occurs because of one of the following:
1) The SQL connection limit being hit. Assuming you don't see this often, it is unlikely to be the cause, but it's worth checking; putting a limit on your connection pool can help (see the example after this list).
2) Your neighbours being extremely noisy, causing the node to re-adjust.
3) Hardware failure, with Microsoft bringing your database back online on a different node. This can take some time.
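On point 1: if the site's data layer is ADO.NET, the pool cap goes in the connection string. A sketch (server, database, and credentials are placeholders, and 50 is an arbitrary limit):

    Server=tcp:xxx.database.windows.net,1433;Database=dbname;User ID=myuser;Password=mypassword;Max Pool Size=50;Connect Timeout=30;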
Normally I have seen this when Microsoft has throttled or had problems with a box and had to fail everyone over. Because you are on a shared system, you have to keep in mind that they are recovering everyone else in that node as well, and sometimes this takes time.
The best bet, if you are worried and need to get a resolution for the business, is to open a support ticket with MS and give them the time and the error message you saw. They will investigate, and they generally have really good back-end telemetry that will point to a reason. That lets you give the business a resolution, and then you can make a call on future plans and contingencies. You have to keep in mind, though, that SQL Azure is a shared system and transient errors can happen; you might need to design more failover into your designs.
First let me say I am only a novice programmer, and by no means a SQL guru. We have an app at work that is, and has been, under heavy development from the vendor for some time (2+ years). It runs as an MSSQL instance on one of our servers, and there is a client install for the desktops. The client software makes direct SQL calls to the database (it also has a local MySQL instance to handle the client settings). There are 6-12 ports that had to be opened up for the communication. Looking at the SQL manager, I can see direct SQL calls from various clients.
It seems to me this is entirely the wrong approach. The closest thing I have done to this was a webpage + PHP + MySQL: the webpage would make requests, all the processing would happen server-side, and the page would simply display the results. The sluggishness my users feel, I think, comes from the client-side requests plus the processing of the SQL data.
PS: I realize that if they have not done it by now, switching to another paradigm seems out of the question. I just want to know if I am way off base.
You are way off base.
The client side has much more processing power in aggregate.
Consider the case of one server and 5 clients. Even if the server has 3 times the power of a single client, the clients as a whole are still 5:3 more powerful.
If the application is sluggish, it was probably poorly written, and you need to investigate the root cause. Client/server is a leading design practice; I'm guessing it is not the root cause. It might be badly implemented, or there might be other reasons. Your comment about having a local MySQL sounds very fishy to me; there should be no need for that.
I have a .NET application (VB.NET) that runs against an MS Access database. Every data request opens a connection to the Access database, runs the query, returns the results, and closes the connection again.
I placed the database on a Windows XP 32-bit machine.
I have two clients on which I installed the .NET application. Both clients run Windows 7 Professional 32-bit.
Now I have a performance problem with this.
When I use the first client, it runs fine; all data is shown very quickly. When I then use the second client, it takes some 10 seconds to connect to the database, fetch the data, and close the connection again. Once I ask for other data on that second client, it all runs fine, until I request data from the first client again: then it once more takes 10 seconds on the first client before my data is fetched.
Can anybody please help me with that? I owe a Belgian beer to the solver of this issue ;-)
Thanks!
Tom Wickerath wrote a great article on improving multiuser performance for MS Access applications. While his article assumes an MS Access front-end, many of the tips should apply to a .NET application. I recall two points that might help you:
Keep a persistent connection to the back-end
Use (short) UNC paths instead of mapped drives
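For example, link to something like \\fileserver\data\backend.mdb directly rather than through a mapped Z:\data\backend.mdb (server and path names here are made up).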
After a long search, I found it out... my virus scanner, NOD32, was causing this, most probably by excessive scanning of inbound and outbound network traffic.
I'm not sure Stack Overflow is the right place for questions like this, but ...
It sounds like the first process is locking the file, so the second process has to wait.
"Use SQL Server" isn't a completely flippant response - SQL Server is specifically designed to handle concurrency issues like this.
IMHO ...
PS:
This is a pretty lame link, but it might help:
http://office.microsoft.com/en-us/access-help/about-sharing-an-access-database-on-a-network-mdb-HP005240860.aspx
PPS:
Here's a somewhat better link, with some suggestions for things you can do to improve concurrency:
http://www.softcoded.com/web_design/upgrading_access.php
I need to access a Sybase database (12.5) from overseas. The high latency is definitely a problem.
I have already optimized the connection parameters to make better use of the network and achieved a 20x performance increase, but it's still not enough: 1 minute to get 3 MB of data, roughly 50 KB/s, so the bottleneck is clearly latency rather than bandwidth.
We need another 10x or 20x increase for our application.
Technical data:
the data flows through a single TCP connection using the TDS protocol
the client app is an Excel sheet with macros, using the default Sybase driver
the corporate environment makes it difficult to push big changes in the 10+ year old architecture, so solutions need to be as unintrusive as possible. But some changes may be negotiable given the importance of this project.
Can anyone give me pointers?
I have already thought of:
splitting SQL requests over several concurrent connections to the database. The problem is data consistency: what if records are modified in between, since the requests will not be executed at exactly the same time? Is there an existing mechanism to spread a request over several calls on different connections?
using some kind of database "cache" or "local replication" overseas, but I don't know what is possible.
Thanks.
Try installing a local database (ASE or ASA) and synchronizing it with the main Sybase database using Sybase MobiLink (or Sybase Replication Server if you need low replication latency and have a lot of money).
(I know, I am answering my own question.)
Eventually, we settled on designing our own database remote-access protocol. It's not complicated, since we are only using a basic subset of SQL (SELECT and UPDATE), and the protocol doesn't have to understand SQL anyway.
By using our own protocol, we'll be able to use compression, make the client able to use several TCP links at the same time, maximize network utilisation, and add some functional caching specific to our application.
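To give a feel for the compression component alone, here is a toy sketch using Node's built-in zlib (in TypeScript; the fake rowset and the resulting ratio are purely illustrative, since real ratios depend on the data, though tabular result sets tend to compress very well):

    // compression-demo.ts - how much does a repetitive rowset shrink under gzip?
    import { gzipSync } from "zlib";

    // Fake a result set: repetitive tabular data, as result sets often are.
    const rows = Array.from({ length: 5000 }, (_, i) => ({
      id: i,
      name: `customer_${i % 100}`,
      region: "EMEA",
      balance: (i * 3.14).toFixed(2),
    }));

    const raw = Buffer.from(JSON.stringify(rows));
    const compressed = gzipSync(raw);

    console.log(`raw: ${raw.length} bytes, gzipped: ${compressed.length} bytes`);
    console.log(`ratio: ${(raw.length / compressed.length).toFixed(1)}x`);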
The client will be our app, and the server will be a "proxy" to the real database, sitting next to it (like @Tim suggested in the comments).
It's not the only solution, but we feel that it's a good balance between the enormous price of replication, the development complexity, and the expected benefits.