I'm creating a Socket.IO application that has many rooms and many users inside those rooms. I needed a way to store temp data about each room and its users (Socket.IO can store data about each socket connection, but not about the room). I thought about using a HashMap on the server (key: room => value: tempData), but I figured that would be too much overhead for Node.js since it's single-threaded. So I decided to just store the temp data in a Postgres table. The temp data gets added and deleted often; it's just a few boolean values, etc.
The essence of my question: is it okay for me to store temp data in Postgres, or would it be better to just store it in a HashMap on the Express server?
Thank you!
If you have one server, keep it in memory.
If you have more than one server, OR you want the data to stick around when the server is restarted, hold it in the DB.
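For the single-server case, here is a minimal sketch of the in-memory approach, assuming io is an existing Socket.IO server instance; the Map name, its fields, and the event names are made-up examples.

const roomState = new Map(); // key: room name -> value: temp data for that room

io.on('connection', (socket) => {
  socket.on('joinRoom', (room) => {
    socket.join(room);
    if (!roomState.has(room)) {
      roomState.set(room, { gameStarted: false, userCount: 0 });
    }
    roomState.get(room).userCount += 1;
  });

  socket.on('leaveRoom', (room) => {
    socket.leave(room);
    const state = roomState.get(room);
    if (state && --state.userCount <= 0) {
      roomState.delete(room); // drop the temp data once the room is empty
    }
  });
});

A Map lookup happens in-process and is effectively free compared to a network round trip to Postgres, so the single-threaded event loop is not a concern here.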
I'm an absolute newbie with Redis.
I need to:
list all databases
list all data structures
I've connected to a Redis 4.0.11 server using redis-cli.
Redis is a key-value store, not a relational database. You can't query or structure Redis the way you do a database; you can only retrieve the value associated with the key you pass.
Usually a key-value store like Redis is used in addition to a database for high-performance key-value storage and retrieval, when the performance of the database alone is not enough.
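For illustration, a minimal redis-cli session showing that everything is accessed by key (the key and value here are made up):

127.0.0.1:6379> SET mykey "hello"
OK
127.0.0.1:6379> GET mykey
"hello"
127.0.0.1:6379> TYPE mykey
string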
Server1: Prod, hosting DB1
Server2: Dev hosting DB2
Is there a way to query databases living on two different servers with the same SELECT query? I need to bring all the new rows from Prod to Dev, using a query like the one below. I will be using SQL Server DTS (the import/export data utility) to do this.
Insert into Dev.db1.table1
Select *
from Prod.db1.table1
where table1.PK not in (Select table1.PK from Dev.db1.table1)
Creating a linked server is the only approach I am aware of for this to work. If you are simply trying to add all new rows from prod to dev, why not just create a backup of that one particular table, pull it into the dev environment, and then write the query against the same server and database?
Granted, this is a one-time fix and a pain for recurring loads, but if it is a one-time thing then I would recommend doing that. Otherwise, make a linked server between the two.
To back up a single table in SQL Server, use the SQL Server Import and Export Wizard. Select the prod database as your data source, select only the prod table as your source table, and make a new table in the dev environment as your destination table.
This should get you what you are looking for.
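If you go the linked server route instead, here is a hedged sketch of the setup; the linked server name, provider, and data source are placeholders.

-- Create a linked server on the Dev box pointing at Prod.
EXEC sp_addlinkedserver
    @server = N'PRODLINK',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'ProdServerName';

-- Map logins (here simply passing through the current credentials).
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'PRODLINK',
    @useself = N'True';

-- The original query can then use four-part names:
INSERT INTO db1.dbo.table1
SELECT p.*
FROM PRODLINK.db1.dbo.table1 AS p
WHERE p.PK NOT IN (SELECT d.PK FROM db1.dbo.table1 AS d);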
You say you're using DTS; the modern equivalent would be SSIS.
Typically you'd use a data flow task in an SSIS package to pull all the information from the live system into a staging table on the target, then load it from there. This is a pretty standard operation when data warehousing.
There are plenty of different approaches to save you copying all the data across (e.g. use a timestamp, use rowversion, use Change Data Capture, make use of the fact your primary key only ever gets bigger, etc.). Or you could just do what you want with a lookup flow directly in SSIS...
The best approach will depend on many things: how much data you've got, what data transfer speed you have between the servers, your key types, etc.
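As one hedged illustration of those incremental options, here is a rowversion-based high-water-mark extract; dbo.SourceTable, RowVer, and dbo.LoadLog are all made-up names.

-- Pick up only the rows changed since the last successful load.
DECLARE @LastVersion binary(8);

SELECT @LastVersion = LastRowVersion
FROM dbo.LoadLog
WHERE TableName = 'SourceTable';

SELECT *
FROM dbo.SourceTable
WHERE @LastVersion IS NULL OR RowVer > @LastVersion;

-- After the load succeeds, record the new high-water mark.
UPDATE dbo.LoadLog
SET LastRowVersion = (SELECT MAX(RowVer) FROM dbo.SourceTable)
WHERE TableName = 'SourceTable';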
When your servers are all in one Active Directory domain and you use Windows Authentication, all you need is an account with the proper rights on all the databases!
You can then simply reference all tables like server.database.schema.table
For example:
insert into server1.db1.dbo.tblData1 (...)
select ... from server2.db2.dbo.tblData2;
I have a table in a SQL Server database which contains a large amount of data, around 2 million records (approx. 20 columns per row). The data in this table gets overwritten at the end of each day with new data.
Once the new data is available, I need to copy it from the SQL Server database to a MongoDB collection.
The question is: what is the fastest way to achieve this?
Some options:
A simple application that reads and writes
Some sort of export/import tool.
Generating one or multiple files from SQL and then reading them concurrently to import into MongoDB
From my experience:
A simple application that reads and writes.
Will be the slowest.
Some sort of export/import tool.
Should be much faster than the first option. Take a look at the bcp utility to export data from SQL Server, and then import the data with mongoimport. However, the way you store data in Mongo might differ a lot from the SQL schema, so it might be quite a challenge to do the mapping with export/import tools.
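A hedged sketch of that pipeline (server, database, file, and column names are placeholders):

rem Export the table in character format (bcp writes no header row).
bcp MyDb.dbo.MyTable out C:\export\mytable.csv -c -t, -S MYSQLSERVER -T

rem Import into MongoDB; --fields supplies the column names that bcp omits.
mongoimport --host localhost --db mydb --collection mytable --type csv --fields "col1,col2,col3" --file C:\export\mytable.csv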
Generating one or multiple files from SQL and then reading them concurrently to import into MongoDB.
Parallelizing might speed up the process a bit, but I don't think you will be satisfied with the results.
From your question, the data gets overwritten at the end of each day. I'm not sure how you do it now, but I think it makes sense to write the data to both SQL and Mongo at that time. That way you won't have to query the data from SQL again to update Mongo; you will just be writing to Mongo at the same time you are updating SQL.
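A minimal sketch of that dual-write idea, assuming the mssql and mongodb npm packages; the connection strings, table, and collection names are placeholders.

const sql = require('mssql');
const { MongoClient } = require('mongodb');

async function saveDailyBatch(rows) {
  // Write the batch to SQL Server as before (batched inserts would be used in practice).
  const pool = await sql.connect(process.env.SQL_CONN_STRING);
  for (const row of rows) {
    await pool.request()
      .input('id', sql.Int, row.id)
      .input('payload', sql.NVarChar, JSON.stringify(row))
      .query('INSERT INTO dbo.DailyData (Id, Payload) VALUES (@id, @payload)');
  }

  // Write the same rows to MongoDB in the same job, so no second extract is needed.
  const mongo = new MongoClient(process.env.MONGO_URI);
  await mongo.connect();
  await mongo.db('reporting').collection('dailyData').insertMany(rows);
  await mongo.close();
}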
Hope it helps!
I am writing a set of stored procedures that aggregate data from large datasets.
The main stored procedure makes a call to another server (or servers) where the data is located. The data is calculated in steps and stored in multiple temp tables (currently global temp tables) and then pulled to the server I'm sitting on (this is done because of the way the linked servers are set up).
Right now I'm trying to write dynamic SQL to create temp tables with a unique identifier, because multiple people may run the stored procedures at the same time. However, because of the number of sub-steps in this process it's getting complex, so I'm wondering if I'm overthinking it.
My question is: if I simplify and just use local temp tables, will I run into problems because the tables will have the same name? NOTE: users may have the same login user names.
Temp table names are per-session. When you call SqlConnection.Open you get a new session. Normally, applications do not share sessions between HTTP requests; that is neither common nor a good thing.
I don't believe you have a problem. If you get name clashes then you should fix the application to not share sessions in the first place.
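A quick illustration of the per-session scoping; the table and column names are made up.

-- Run this in two separate sessions (e.g. two SSMS windows or two connections):
-- each session gets its own copy of #StepResults, so the identical name never clashes.
CREATE TABLE #StepResults
(
    RowId int           NOT NULL,
    Flag  bit           NOT NULL,
    Notes nvarchar(200) NULL
);

INSERT INTO #StepResults (RowId, Flag, Notes)
VALUES (1, 1, N'visible only to this session');

SELECT * FROM #StepResults;

-- A global temp table (##StepResults) would be shared across sessions,
-- which is why the global-table approach needed unique names in the first place.
DROP TABLE #StepResults;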
I need to push a large SQL table from my local instance to SQL Azure. The transfer is a simple, 'clean' upload - simply push the data into a new, empty table.
The table is extremely large (~100 million rows) and consists only of GUIDs and other simple types (no timestamp or anything).
I create an SSIS package using the Data Import / Export Wizard in SSMS. The package works great.
The problem is when the package is run over a slow or intermittent connection. If the internet connection goes down halfway through, then there is no way to 'resume' the transfer.
What is the best approach to engineering an SSIS package to upload this data, in a resumable fashion? i.e. in case of connection failure, or to allow the job to be run only between specific time windows.
Normally, in a situation like that, I'd design the package to enumerate through batches of size N (1k rows, 10M rows, whatever) and log the last successfully transmitted batch to a processing table. However, with GUIDs you can't quite partition them out into buckets.
In this particular case, I would modify your data flow to look like Source -> Lookup -> Destination. In your lookup transformation, query the Azure side and only retrieve the keys (SELECT myGuid FROM myTable). Here, we're only going to be interested in rows that don't have a match in the lookup recordset as those are the ones pending transmission.
A full cache is going to cost about 1.5 GB of memory (100M rows × 16 bytes), assuming the Azure side was fully populated, plus the associated data transfer costs. That cost will be less than truncating and re-transferring all the data, but I just want to make sure I called it out.
Just order by your GUID when uploading, and make sure you use the max(guid) from Azure as your starting point when recovering from a failure or restart.
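A hedged sketch of that resume logic; dbo.SourceTable, dbo.AzureTable, and RowGuid are placeholder names.

-- 1) Ask the Azure side how far the last run got (run against the Azure connection).
DECLARE @LastGuid uniqueidentifier;
SELECT @LastGuid = MAX(RowGuid) FROM dbo.AzureTable;

-- 2) Use this as the source query of the data flow, so a restart only sends the remainder.
SELECT *
FROM dbo.SourceTable
WHERE @LastGuid IS NULL OR RowGuid > @LastGuid
ORDER BY RowGuid;

Note that SQL Server compares uniqueidentifier values using its own byte ordering; since both sides are SQL Server, the MAX and the > filter stay consistent with each other.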