I have two servers - one Windows server with SQL Server Express and one Linux server.
On the Linux server I have a shell service that waits for a new folder. After something is added, it checks whether it is OK, and then it should create a new record; for example, in the customer table it should create a new customer.
I already have the first part, but I don't know how to get the data from the shell script into SQL Server.
You could follow the steps below:

1. Set up a share on the Windows server that is accessible to the Linux server.
2. Have your Linux script generate a CSV file of the data to be inserted and push it to the Windows share via SMB.
3. Write a Windows batch file or PowerShell script, set up as a scheduled task on whatever interval you want, that iterates over each file dropped in the Windows directory by the Linux process and calls BCP to insert the data.
4. Move the processed files to an archive directory as part of the Windows batch file.
For documentation on using BCP: http://msdn.microsoft.com/en-us/library/ms162802.aspx
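On the Linux side, step 2 might look like the sketch below. The CSV layout, share name, credentials, and file names are assumptions for illustration, and the actual smbclient push is left commented out:

```shell
#!/bin/sh
# Sketch: turn validated folder data into a CSV row and queue it for
# the Windows share. File name, fields, and share details are placeholders.
OUT_CSV="/tmp/customer_new.csv"

# Example customer fields extracted from the validated folder (assumed)
name="Acme Ltd"
email="contact@acme.example"

# Write a plain comma-separated row with no header, which matches
# BCP character mode with a comma field terminator.
printf '%s,%s\n' "$name" "$email" > "$OUT_CSV"

# Push to the Windows share (uncomment and adjust for your environment):
# smbclient //winserver/sqldrop -U svc_user%secret -c "put $OUT_CSV customer_new.csv"

# On the Windows side, the scheduled task would run something like:
#   bcp MyDb.dbo.customer in customer_new.csv -S .\SQLEXPRESS -T -c -t,
echo "queued $OUT_CSV"
```

The `-c -t,` flags tell BCP to read character data with comma field terminators, and `-T` uses a trusted (Windows) connection; adjust to `-U`/`-P` if you use SQL authentication.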
Related
I'm able to connect to and interact with a Hive database in the PuTTY terminal, but I'm clueless about how to do it in DataGrip.
These are the steps I'm using in PuTTY:
Starting an SSH session on the host (HostName) and port (22), which opens a terminal where I enter my login details.
Then I invoke a batch script in the remote SSH session, which in turn calls other .sh scripts; this step sets various environment variables and defines a lot of Hive configurations. When the batch file completes I see a "hive>" prompt on the terminal, indicating I can run SQL queries.
Is there any way I can get DataGrip working in this environment, with the driver location, work directory, home directory, and everything else set up on the remote server, and call this batch script from DataGrip itself?
So far I have developed cms/blocks directly on the staging server, so I have no local dev environment.
Suddenly the server HDD crashed, and I failed to restore the data on the server.
All my code is managed with git, but I can't find any files related to cms/blocks.
The content of CMS blocks is stored in the database table cms_block. There is no backup in the filesystem.
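If the database itself survived, the block content can be pulled straight from that table. A query along these lines (column names per the standard Magento 1 schema) would show what is recoverable:

```sql
-- List the CMS blocks stored in the database (Magento 1 schema)
SELECT block_id, identifier, title, content
FROM cms_block;
```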
I am using a Windows 7 machine. I would like to know how to transfer data from the local machine to a Windows 2003 server and create a directory on the target machine through an Ant script and a batch script.
Most systems have an admin share defined. Your C: drive is located at \\localhost\C$. Replace localhost with the name of your target system.
You can run net use n: \\servername\c$ to establish a connection. If you are not in a domain, you will need to specify a username and password for the connection.
Once you map it, you can treat it like a local drive in your scripts in most situations. Then use whatever tool you are comfortable with to move the files; robocopy is a good one for this.
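Since you mentioned Ant: once the share is mapped, an Ant target can create the directory and copy files using ordinary paths. A minimal sketch (project name, target name, and paths are assumptions):

```xml
<project name="deploy" default="push">
  <!-- n: is the drive mapped with "net use" above -->
  <property name="target.dir" value="n:/inetpub/myapp"/>
  <target name="push">
    <mkdir dir="${target.dir}/data"/>
    <copy todir="${target.dir}/data">
      <fileset dir="build/output"/>
    </copy>
  </target>
</project>
```

The batch script would run `net use` first, then call `ant push`.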
For example, I have a client-server application that gets updated often (it's an exe file). If I download the update on the server machine, the same update should be transferred to the client machines, or vice versa.
At the moment the update is downloaded on every machine individually. My idea is that the download should happen only on the server, and I'm planning to add an option in the client to copy the *.exe update directly from the server into the installation path.
How can I make this happen?
NOTE: the update is a self-extracting file.
There is already a technology for achieving this called ClickOnce. The client application can be "published" to a share that is accessible to all the clients, then each time the client is executed a version check is done - if a later version is detected on the share then it is downloaded before execution continues.
You can read more about this here: ClickOnce Security and Deployment. Creating a ClickOnce package and publishing it is a feature already built into Visual Studio, so you do not need to write any code.
You would have to write an application split into different parts:
Detect file changes on either the client or the server machine.
Perform the copy of the exe file to the server or to the other clients.
In any case, you can't tell an exe file to update itself.
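If ClickOnce is not an option, the manual approach above boils down to a freshness check plus a copy. A minimal sketch of that logic (the share and install paths are placeholders, demonstrated here with temporary files):

```shell
#!/bin/sh
# Sketch: copy the update from the server share only when the local copy
# is missing or older. Paths are placeholders for the real share/install dirs.
copy_if_newer() {
    src="$1"; dst="$2"
    if [ ! -f "$dst" ] || [ "$src" -nt "$dst" ]; then
        cp "$src" "$dst" && echo "updated $dst"
    else
        echo "already up to date"
    fi
}

# Demo with temporary files standing in for the share and install dir
mkdir -p /tmp/share /tmp/install
echo "v2" > /tmp/share/app_update.exe
copy_if_newer /tmp/share/app_update.exe /tmp/install/app_update.exe
```

Comparing timestamps is the simplest check; comparing an embedded version number or a file hash is more robust against clock skew between machines.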
I have a CentOS Linux machine, and an Oracle server installed at a remote location.
I installed the Oracle client on my CentOS machine using the link:
How to install SQL * PLUS client in linux
It may be noted that while installing the client there was no /network/admin directory, and hence no tnsnames.ora file. I have now created the directories manually and created a tnsnames.ora file, and I am able to connect to the remote server.
Now when I look into the bin folder I see just three executables:
adrci, genezi, sqlplus.
I can't find imp.
Hence, when I try to import the dump file from CentOS to Oracle, I get the error:
-bash: imp: command not found
I am using the following command to import the dump into the Oracle server:
imp 'rdsuser@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=oracledbrds.cwuabchlhlu.us-east-2.rds.amazonaws.com)(Port=1521))(CONNECT_DATA=(SID=oracledb)))'
Kindly help
The instant client does not include many of the tools from the full client, including imp/exp, their newer Data Pump equivalents, SQL*Loader, etc. See the instant client FAQ, which highlights that it's largely for distribution with your own applications but can include SQL*Plus, the only tool mentioned.
If you need to use export/import or any other tools then you will need to install the full client, or run them on the server; which might be an issue with AWS. Amazon have an article on importing data into Oracle.
Incidentally, you can put your tnsnames.ora file anywhere as long as you set TNS_ADMIN to point to that location, but you aren't referring to it in your imp command anyway; you're specifying all the connection data inline. If you know the service name, which may be different from the SID (you can run lsnrctl services on the server to find the right value), you can use the 'easy connect' syntax:
sqlplus rdsuser@//oracledbrds.cwuabchlhlu.us-east-2.rds.amazonaws.com:1521/your_service_name
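For reference, the equivalent tnsnames.ora entry would look like the fragment below (the alias name and service name are assumptions; put the file in the directory TNS_ADMIN points to):

```
ORACLEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)
      (HOST = oracledbrds.cwuabchlhlu.us-east-2.rds.amazonaws.com)
      (PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = your_service_name))
  )
```

With that in place, the connection shortens to sqlplus rdsuser@ORACLEDB.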