User accessing database files in SQL Server

Is it possible in SQL Server to find which users are writing data to the database files? For example: there are two users, SA and microsoft\thomas, on an instance. The ABC database is accessed by both users and it has three files: ABC1.mdf, ABC2.ndf and ABC3.ldf. Can we find who is writing how much data to which files?
I need to track the users who write heavily to the databases, as the disks become full because of them.

So I think it's not possible to track which user is writing how much data, and to which data files.
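That said, the two halves are visible separately through the standard DMVs: per-session write counters grouped by login, and per-file write volumes per database; they just cannot be joined per user and per file. A rough sketch (the ABC database name is taken from the question):

-- Writes per session/login (pages written by each session, not broken down by file)
SELECT s.login_name, s.session_id, s.writes, s.logical_reads
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
ORDER BY s.writes DESC;

-- Write volume per file of the ABC database (not broken down by user)
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file_name,
       vfs.num_of_writes,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID('ABC'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;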

Related

R - Shiny - SQL - Dropbox: can they work together?

We have a shiny app running from our institute server (basically modifying a table).
We would like to store the table in an SQL DB.
Is it possible to save the SQL DB in a Dropbox account and interact with it from there?
Pseudocode:
load table from Dropbox SQL db
create DT data table
modify the data table in shiny
Update the SQL DB in Dropbox
I am asking for any working examples illustrating the first and last steps above.
Many thanks!!
I'm not sure what you're referring to by "Dropbox SQL db" - does Dropbox have its own SQL service? I believe not. Do you have some kind of SQL dump in the Dropbox?
Anyway, what you could do is set up an SQLite database and save it to the Dropbox. To my knowledge, SQLite does not support concurrent connections, but if only one user is accessing the SQLite db at a time, it should work.
Check out RSQLite.
/Edit: Of course, the institute server also has to have direct access to the Dropbox.

Best way to set up a new database on a new server which periodically refreshes tables from a live SQL Server?

I need to create a database solely for analytical purposes. The idea is for it to start off as a 1:1 replica of a current SQL Server database, to which we will then add additional tables. The goal is to have read-write access to a db without inadvertently dropping anything in production.
We would ideally like to set a daily refresh schedule to update all tables in the new db to match the tables in the live environment.
In terms of the DBMS for the new database, I am flexible - MySQL, SQL Server or PostgreSQL would be great -- I am not hugely familiar with the Google Storage/BigQuery stack, but if this is an easy option, I'm open to it.
You could use a standard HA/DR solution with a readable secondary (Availability Groups/mirroring/log shipping),
then have a second database on the new server for your additional tables.
Cloud Storage and BigQuery are not RDBMS services themselves, but they could be used in this case to store the backups/exports/dumps from the replica, with the analytical work then performed on those backups.
Here is an example workflow:
Perform a backup and restore in a different database
Add the new tables in the new database
Export the database as a CSV file on your local machine
Here you could either directly load the CSV file into BigQuery, or upload the file to a Cloud Storage bucket created beforehand
Query the data
I suggest taking a look at the multiple methods for loading data into BigQuery, as well as the methods for querying against external data sources, which may help determine which database replication/export method is best for your use case.
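As a rough sketch of the first two steps, assuming a source database named Prod whose logical file names are Prod and Prod_log (check with RESTORE FILELISTONLY), and an analytics copy named ProdAnalytics - all names and paths here are placeholders:

-- Back up the live database and restore it under a new name for the analytical copy
BACKUP DATABASE Prod
TO DISK = N'D:\Backups\Prod.bak'
WITH INIT;

RESTORE DATABASE ProdAnalytics
FROM DISK = N'D:\Backups\Prod.bak'
WITH MOVE N'Prod'     TO N'D:\Data\ProdAnalytics.mdf',
     MOVE N'Prod_log' TO N'D:\Log\ProdAnalytics_log.ldf',
     REPLACE;

-- Add the analysis-only tables to the restored copy (illustrative table)
USE ProdAnalytics;
CREATE TABLE dbo.AnalysisNotes (
    NoteId INT IDENTITY(1,1) PRIMARY KEY,
    Note   NVARCHAR(MAX)
);

Scheduling this script, e.g. as a daily SQL Server Agent job, would give the refresh cadence described above.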

What is an efficient way to manage a VB form which handles huge data (much larger than 2 GB) with Access 2013 as the database?

I am currently designing a Windows form using VB.NET. The internet states that 2 GB is the limit for a .accdb file. However, I am required to handle data a lot larger than 2 GB. What is the best way to implement this? Is there any way I could regularly store data in some other Access db and empty my main database? (But would this create trouble when migrating data from the .accdb to the Windows form when demanded by the user?)
Edit: I read somewhere that splitting could help, but I don't see how - it only creates a copy of the database on your local machine on the network.
You can use a linked table to Microsoft SQL Server 2012 Express edition, where the maximum relational database size is 10 GB.
You can also use a MySQL linked table, which has a 2 TB limitation.
It's not easy to give a generic answer without further details.
My first recommendation would be to change DBMS and use SQLite, which supports databases up to roughly 140 TB.
If you must use Access then you will need a master database containing pointers to the real location of the data.
E.g.
MasterDB -> LocationTable -> (id, database_location)
So if you need a resource you will have to query the master with the id to get its actual location, and then connect to the secondary database and fetch the data.
Or you could have a mapping model where a certain range of IDs lives in a certain database; you can keep that logic in code and access the db only once.
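A minimal sketch of that lookup pattern, written in generic SQL rather than Access-specific DDL (all table and column names are illustrative):

-- Master database: one row per resource, pointing at the file that holds it
CREATE TABLE LocationTable (
    id                INTEGER PRIMARY KEY,
    database_location VARCHAR(255)   -- e.g. the path to the secondary .accdb file
);

-- Step 1: ask the master where resource 42 lives
SELECT database_location
FROM LocationTable
WHERE id = 42;

-- Step 2: open that .accdb from the VB.NET code and fetch the actual record there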
Use SQL Server Express. It's free.
https://www.microsoft.com/en-us/cloud-platform/sql-server-editions-express
Or, if you don't want to use that, you'll need to split your data into different Access databases, and link to what you need. Do a Google search on this and you'll have everything you need to get going.
I agree with the other posts about switching to a more robust database system, but if you really do have to stay in Access, then yes, it can be done with linked tables.
You can have a master database with queries that use linked tables in multiple databases, each of which can be up to 2 GB. If a particular table needs to have more than the Access limit, then put part of the data in one database and part in another. A UNION query will allow you to return both tables as a single dataset.
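For example, with two hypothetical linked tables Orders_Part1 and Orders_Part2, each living in its own .accdb under the 2 GB limit:

-- Combine the two halves of the split table into a single result set
SELECT OrderId, CustomerId, OrderDate FROM Orders_Part1
UNION ALL   -- use UNION instead if the parts might overlap and duplicates must be removed
SELECT OrderId, CustomerId, OrderDate FROM Orders_Part2;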
Reads and updates are one thing, but there is the not-so-trivial task of managing growth if you need to do inserts. You'll need to know when a database file is about to grow beyond 2 GB and create a new one whose tables must then be linked to your master database.
It won't be pretty.

How to insert data from one Azure SQL Database into a different Azure SQL Database?

I realize that Azure SQL Database does not support doing an insert/select from one db into another, even if they're on the same server. We receive data files from clients and we process and load them into a "load database". Once the load is complete, based upon various rules, we then move the data into a production database of which there are about 20, all clones of each other (the data only goes into one of the databases).
Looking for a solution that will allow us to move the data. There can be 500,000 records in a load file and so moving them one by one is not really feasible.
Have you tried Elastic Query? Here is the Getting Started guide for it. Currently you cannot perform remote writes, but you can always read data from remote tables.
Hope this helps!
Silvia Doomra
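As a rough sketch of what the Elastic Query setup might look like (run in the production database; every server name, credential and column below is illustrative), the direction is reversed - the production side reads from the load database, since remote writes are not supported:

-- One-time setup in the production database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL LoadDbCredential
WITH IDENTITY = 'load_db_user', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE LoadDbSource
WITH (
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',
    DATABASE_NAME = 'LoadDatabase',
    CREDENTIAL = LoadDbCredential
);

-- External table mirroring the staging table's schema in the load database
CREATE EXTERNAL TABLE dbo.StagedClients (
    ClientId INT,
    ClientName NVARCHAR(200)
)
WITH (DATA_SOURCE = LoadDbSource);

-- Pull the staged rows into the production table in one set-based statement
INSERT INTO dbo.Clients (ClientId, ClientName)
SELECT ClientId, ClientName
FROM dbo.StagedClients;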

How to save images on an FTP server or in a SQL db?

For every row, which represents a client's data (name, phone, etc.), I also need to save 3 images. Is it better to save the images to FTP or in the SQL db?
Images will be shown in bootstrap carousel.
(I'll use ASP.NET 5 / MVC 6 with an MS SQL db)
I would say that if you have a very good infrastructure, saving in the DB is better, and here is why I think that:
You can access it in the same way that your other data
No extra setups for serving the media
If you have multiple servers, your sync is done with the DB
But if you have a small app with a small server, putting the media in a folder and keeping a reference to the file in the db is not the end of the world; just remember that if you have more than one server, you will have to replicate the files to the other servers as well.
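For concreteness, here is a minimal sketch of the two options as tables (all table and column names are illustrative):

-- Shared client table
CREATE TABLE Clients (
    ClientId INT IDENTITY(1,1) PRIMARY KEY,
    Name     NVARCHAR(100),
    Phone    NVARCHAR(30)
);

-- Option 1: store the image bytes in the database
CREATE TABLE ClientImages (
    ImageId   INT IDENTITY(1,1) PRIMARY KEY,
    ClientId  INT REFERENCES Clients(ClientId),
    ImageData VARBINARY(MAX)           -- the image itself
);

-- Option 2: store only a reference to a file kept on disk or FTP
CREATE TABLE ClientImagePaths (
    ImageId   INT IDENTITY(1,1) PRIMARY KEY,
    ClientId  INT REFERENCES Clients(ClientId),
    ImagePath NVARCHAR(400)            -- path or URL of the stored file
);

The carousel would then either stream ImageData from the table or use ImagePath as the image URL.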
I would say you should try both ways before making a decision.