AWS Website hosting in .NET/SQL - Beginner [closed]

A little too late for AWS hosting, but I guess it's never too late :)
I have a .NET MVC-based website hosted on a traditional Windows server with SQL Server. I get about 5,000 hits per day, the database is roughly 500 GB in size, and monthly traffic is about 50 GB.
I have to migrate this to AWS. What are the steps? I have a simple .NET C# MVC web app which connects to SQL Server. Also, I would like to know how much it will cost to host a website with the above requirements on AWS.
Thanks in Advance
Mandy

It will be around $5xx USD per month based on the AWS calculator, and $4xx per month for an Azure VM + SQL managed DB. If you want long-term hosting, I would suggest you purchase your own SQL Server license; it will save more!
For reference: one EC2 instance + one SQL Server.

You could consider Amazon Lightsail, which has a single price for compute (including data transfer) and a single price for the database (including storage). However, Microsoft SQL Server is not available with Lightsail.
The alternative is to launch:
An Amazon EC2 instance for your application
An Amazon RDS database for your SQL Server
The price will vary based upon the size of instance and database you choose. Storage is an extra cost, based on how much you allocate.
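Whichever sizes you pick, the application-side change is usually just the connection string: the app on EC2 talks to RDS exactly as it would to any other SQL Server. A minimal connectivity sketch, assuming a hypothetical RDS endpoint and database name (the real endpoint comes from the RDS console):

    // C#, a minimal smoke test against RDS for SQL Server.
    // NuGet: Microsoft.Data.SqlClient. All names below are hypothetical.
    using System;
    using Microsoft.Data.SqlClient;

    class RdsSmokeTest
    {
        static void Main()
        {
            // Copy the real endpoint from the RDS console; 1433 is the SQL Server default port.
            var connectionString =
                "Server=myapp-db.abc123xyz.us-east-1.rds.amazonaws.com,1433;" +
                "Database=MyAppDb;User Id=admin;Password=...;" +
                "Encrypt=True;TrustServerCertificate=True;";

            using var connection = new SqlConnection(connectionString);
            connection.Open();

            using var command = new SqlCommand("SELECT @@VERSION;", connection);
            Console.WriteLine(command.ExecuteScalar());
        }
    }

The same string goes into the MVC app's web.config/appsettings, so the migration itself is typically a restore of the 500 GB database into RDS plus this config change.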

Related

Azure SQL Server backup to Azure Blob - CherrySafe replacement [closed]

I am new to Azure SQL Server and trying to understand how to back up Azure SQL databases to Azure Blob storage a few times a day.
My company currently uses Cherry Safe to back up Azure SQL databases, but Cherry Safe is shutting down in two weeks.
From what I have read, it seems I can configure an export to Azure Blob, but I do not see that option. I can see the history of exports, but I cannot find where to schedule or change the configuration.
For long-term retention, I see an option to configure a retention vault.
Are there replacement services out there that Cherry Safe users are using?
Do I need an external service, or can I configure the backups myself?
Thanks.
It seems that you can back up a SQL Server instance to Azure Blob storage this way, but not an Azure SQL database.
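For what that looks like in practice, here is a minimal sketch of issuing BACKUP ... TO URL from C# against a full SQL Server instance (again, not an Azure SQL database). The server, database, and storage-account names are hypothetical, and it assumes the container credential was already created with CREATE CREDENTIAL:

    // C#, runs a native SQL Server backup to Azure Blob storage.
    // NuGet: Microsoft.Data.SqlClient. Hypothetical names throughout.
    using System;
    using Microsoft.Data.SqlClient;

    class BackupToBlob
    {
        static void Main()
        {
            // Works against SQL Server (e.g. on a VM), NOT Azure SQL Database.
            // Assumes CREATE CREDENTIAL for the container URL (SAS) already ran.
            const string sql = @"
                BACKUP DATABASE [MyAppDb]
                TO URL = 'https://mystorageacct.blob.core.windows.net/backups/MyAppDb.bak'
                WITH COMPRESSION, FORMAT;";

            using var connection = new SqlConnection(
                "Server=myserver;Database=master;Integrated Security=True;TrustServerCertificate=True;");
            connection.Open();

            // Backups can run long; disable the command timeout.
            using var command = new SqlCommand(sql, connection) { CommandTimeout = 0 };
            command.ExecuteNonQuery();
            Console.WriteLine("Backup finished.");
        }
    }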
You could also use SQLBackupAndFTP to back up an Azure SQL database to a local machine:
1. Connect SQLBackupAndFTP to the logical SQL Server in Azure.
2. Create a job for regular Azure SQL Database backups.
Also, as you said, you could use long-term backup retention.
It allows you to preserve weekly, monthly, and yearly backups for an extended period of time, up to 10 years.
You can restore a database from a specific long-term backup if the database has been configured with a long-term retention policy. This allows you to restore an old version of the database to satisfy a compliance request or to run an old version of the application. See Long-term retention.
You don't need any third party to back up an Azure SQL database; there is a much better backup and retention tool in the portal. Read this: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-long-term-backup-retention-configure

Streaming from Oracle Tables to Redshift [closed]

I am new to Redshift and the AWS ecosystem. I am looking for options or best practices for streaming data changes from on-premises Oracle EBS tables to Redshift.
Should S3 be used as the staging area, i.e. Oracle -> S3 -> Redshift? Is that good for streaming Oracle tables to Redshift in real time?
Is there any way to bypass S3 staging and do Oracle -> AWS Kinesis (Firehose) -> Redshift? If so, are there such scenarios I can read up on?
What about using Kafka instead of AWS Kinesis?
Can AWS Kinesis or Kafka pull directly from an on-premise Oracle instance?
Are there other alternatives/components, ETL tools for near real-time or almost real-time data load to Redshift?
There is a large number of tables to stream from Oracle, which is on-prem. I am new to Redshift but familiar with Oracle, SQL Server, and PG. Sorry if I am totally off base here.
Please help :) Any thoughts and/or references would be highly appreciated...
As per the docs here, options 1 and 2 are essentially the same: you won't bypass S3 by using Firehose, you'll just mask it. And Firehose is currently useless if you have lots of tables on more than one cluster, unless of course you plan to automate the process of sending support requests to AWS Support to increase limits (I was thinking about it, don't laugh).
I'd go for loading with the COPY command from S3; see the sketch below.
Inserts are currently slow, and I mean SLOW. Do not use methods that generate INSERT statements under the hood.
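To make that concrete, here is a minimal sketch of the S3 -> Redshift bulk load from .NET. Redshift speaks the Postgres wire protocol, so Npgsql can issue the COPY; the cluster, bucket, and IAM role names are hypothetical, and the files are assumed to be staged in S3 already:

    // C#, issues a Redshift COPY (bulk load from S3), not row-by-row INSERTs.
    // NuGet: Npgsql. Hypothetical cluster/bucket/role names.
    using System;
    using Npgsql;

    class RedshiftCopy
    {
        static void Main()
        {
            var connString =
                "Host=mycluster.abc123.us-east-1.redshift.amazonaws.com;Port=5439;" +
                "Database=analytics;Username=loader;Password=...;" +
                "Server Compatibility Mode=Redshift";

            using var conn = new NpgsqlConnection(connString);
            conn.Open();

            // COPY pulls the staged files straight from S3 into the target table.
            const string copy = @"
                COPY staging.orders
                FROM 's3://my-etl-bucket/orders/2024-01-15/'
                IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
                FORMAT AS CSV GZIP;";

            using var cmd = new NpgsqlCommand(copy, conn) { CommandTimeout = 0 };
            cmd.ExecuteNonQuery();
        }
    }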
My use cases:
Apache Storm streams events to Redshift, using S3 as the staging area. That works fine for hundreds of thousands of events per table per day, dozens of tables per database, several databases per cluster, and a couple of clusters. A second process uses API Gateway, AWS Lambda, and S3 as the staging area; that works just as well for tens of thousands of events per day, with a couple of different clusters, several databases on each cluster, and one table loaded this way in each database.
You can, in theory, issue the COPY command over SSH, but then you have to prepare manifest files on (wait for it) ... S3. So I see no reason not to use S3 for staging the data anyway.
As for streaming data from on-premises Oracle to S3, that is a different topic entirely, and you should look for answers from someone proficient with Oracle. I'd look at CDC, but I'm not an Oracle pro, so I can't tell whether that's a good approach.
I hope this helps.

Why does SQL require a server? [closed]

I'm new to SQL, and I'm trying to understand something basic about it.
Why do we need a server to connect to when using SQL?
In my very narrow vision of it, SQL just uses some databases, which could be implemented as arrays, for example (or whatever it is that is implemented "backstage").
For example, if I want to set up a table on my computer and do some operations on it, what use is the server? Why can't the table "just be there"?
I think the reason for your confusion is a too-narrow interpretation of the word "server" as a separate hardware box.
A server does not need to run on separate hardware, or even in a separate virtual environment. It could be another process on the same computer, or even a library within your process. What makes it a server is an ability to accept and process requests from clients. It does not matter where the server runs physically: as long as you follow a protocol in which requests originate on the client side, you have a server.
What you're envisioning (roughly) is referred to as an in-process database, and they do exist for SQL. SQL Server is set up to be used by multiple users or applications, so it makes sense for it to be a central server that many clients can connect to in order to share the same data.
If you only want to process data locally, there are SQL Server Express LocalDB, SQLite, and a few others that allow you to essentially embed a SQL engine inside your application.
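To see the in-process idea concretely, here is a minimal sketch using Microsoft.Data.Sqlite: the whole "database" is a local file, and the SQL engine runs inside your process with no server to connect to (the file name is arbitrary):

    // C#, in-process SQL with SQLite: no server, just a local file.
    // NuGet: Microsoft.Data.Sqlite.
    using System;
    using Microsoft.Data.Sqlite;

    class InProcessSql
    {
        static void Main()
        {
            // The entire database lives in local.db; nothing is listening on a port.
            using var conn = new SqliteConnection("Data Source=local.db");
            conn.Open();

            var cmd = conn.CreateCommand();
            cmd.CommandText = "CREATE TABLE IF NOT EXISTS people (name TEXT)";
            cmd.ExecuteNonQuery();

            cmd.CommandText = "INSERT INTO people (name) VALUES ('Ada')";
            cmd.ExecuteNonQuery();

            cmd.CommandText = "SELECT COUNT(*) FROM people";
            Console.WriteLine($"rows: {cmd.ExecuteScalar()}");
        }
    }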
You are, perhaps, confusing SQL the language with SQL Server, a Microsoft product that implements SQL.
SQL itself has many, many implementations. Many of those implementations do not use a server. MS Access, SQLite, FileMaker are common SQL-using products that rely on file-sharing rather than a client-server setup to provide multi-user access. These products can all also be used on a single machine without sharing files.
There are also implementations of SQL that use CSV files for storage although these are less common.
Finally, many of the client-server SQL products offer related, smaller-scale implementations that do not require a server. These are generally implemented using file-sharing as well.
It depends on what you need. Some SQL implementations, such as SQLite3, are local and file-based; they have no server. Most provide a server because of the problem they address.
But let's address why a server is needed. Consider a Microsoft Access application where the database is a file shared over a network, and suppose five people are working with the same file. Each time one of them searches for something, the entire file must be passed over the network. Suppose one of them edits a record: the next time the others search, they have to load the entire file again. If the file is large, this is a huge performance hit. This is why servers were created.
A server receives only the SQL. It performs the search or the edit itself and returns only the data that was requested. For any database of reasonable size, the performance improvement is huge.
Another benefit of a server is access control. With a server you can have multiple accounts and control what databases and even tables they have access to, and what activities they are allowed to perform.
In short, the server was created to address the problems that arise when you have multiple clients working with a single database.

What's the correct way to license SQL Server [closed]

I cannot understand how Microsoft licensing works.
Here is the system I am trying to cost.
Windows Server 2008 or 2012
SQL Server Standard
ASP.NET MVC 4 App 20 users
SSRS 20 users
SSIS data imports
So do I need 20 CALs for SSRS?
Do I need 20 CALs for the MVC app which is using SQL Server?
Does SSIS need a CAL?
If you are running a public website using the SQL Server as a backend, you need to use Per Core licensing (or Per Processor licensing prior to SQL Server 2012). With this license, you do not need any CALs.
If you are saying you have an internal web application being used by 20 users, then you have several options and you'll have to figure out which is cheapest (keeping in mind future growth).
You can choose the Per Core licensing as described above and all your licensing needs will be met. Note, you need one license per processor core on your server. You did not specify how many cores your system has. If you have a backup server for fail-over, you will need additional licenses for that as well.
Alternatively, you can choose the Server+CAL model, which is the more confusing approach. First, you'll need a server license. Then, you have to determine how many "users" you have. For this purpose, a "user" is either a specific person (user) or a specific machine (device). You will need one CAL for each "user". If either the machine or the person logged in at the machine has a CAL, they may use the SQL Server for any purpose, whether it be SSRS, SSIS, a web application, or whatever.
If you have backend systems (servers not used by a person) that need to connect to the SQL Server to perform automated data imports or any such thing, those systems will need device CALs. If you have a traveling salesperson who needs to use the web app from anywhere, that person will need a user CAL. If you have typical cubicle employees, you can license their machines with device CALs, or you can license the employees with user CALs; it's up to you.
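As a hypothetical worked example of the counting rule for the setup in the question (counts only, no prices):

    20 staff using the MVC app and SSRS       -> 20 user CALs (one CAL covers all workloads)
    1 unattended server running SSIS imports  -> 1 device CAL
    Total under Server+CAL: 1 server license + 20 user CALs + 1 device CAL

Compare that total against Per Core pricing for your core count to see which is cheaper.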
Edit:
You'll also need a Windows Server license of course. I think that's obvious, but you never know...

Working of Login System in Large Applications [closed]

I am curious to know how the login system works in large applications like Facebook, Gmail, YouTube, Yahoo, etc. After entering credentials, the server responds very quickly. How is that possible?
There must be many database servers storing the user information. So my questions are:
How do they look up authentication information across many database servers?
Do they look through all the database servers to check for a particular user, and if so, how do they respond so quickly?
Do they allocate a database server based on the geographical location of the user?
And do they also have many application servers, and how are these interconnected with each other?
RDBMSs have functionality to link servers and issue distributed queries, updates, commands, and transactions across heterogeneous data sources.
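As an illustration, SQL Server lets a query reference another server by four-part name once it has been registered with sp_addlinkedserver. A minimal sketch from C#, with hypothetical server and table names:

    // C#, distributed query through a SQL Server linked server.
    // NuGet: Microsoft.Data.SqlClient. 'UsersEast' is a hypothetical linked server.
    using System;
    using Microsoft.Data.SqlClient;

    class LinkedServerQuery
    {
        static void Main()
        {
            using var conn = new SqlConnection(
                "Server=localhost;Database=AppDb;Integrated Security=True;TrustServerCertificate=True;");
            conn.Open();

            // Four-part name: [linked server].[database].[schema].[table].
            var cmd = new SqlCommand(
                "SELECT TOP 1 UserId FROM [UsersEast].[Auth].[dbo].[Users] WHERE Email = @email",
                conn);
            cmd.Parameters.AddWithValue("@email", "ada@example.com");
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }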
The database system will use some form of cached information about the user; in SQL Server, an execution plan is stored and reused when a query is executed. The database management system decides which execution plan to take in order to generate the fastest results, or uses a cached data set. Note: Google, Facebook, Amazon, etc. have a lot of server processing power behind the scenes, which makes it seem instantaneous. They also have dedicated teams to manage their databases, maintain indexes, and perform tuning, optimization, and bottleneck analysis.
The geographical location of the server can be a factor. The closer the server is to the user, the faster they can get the information, but IMO this would be a matter of nano/milliseconds' difference depending on where the data center is located. If a server gets too busy, the load balancer will migrate you and other users to a server with more available resources.
Yes. Using more than one web server is needed in scenarios like this, and it ties into part 3 of the question: which server you hit depends on how many available resources the closest server has and whether it will accept your connection. The servers are distributed, but the whole process is transparent to the user, i.e. they think they are using the same server as every other client. The servers can be interconnected using session management, web services, and other interoperability techniques and technologies.