-bash: imp: command not found (Oracle SQL)

I have a CentOS Linux machine, and an Oracle server is installed on a machine at a remote location.
I installed the Oracle client on my CentOS machine following this guide:
How to install SQL*Plus client in Linux
It may be noted that after installing the client there was no /network/admin directory and hence no tnsnames.ora file. I have since created the directories manually and written a tnsnames.ora file, and I am able to connect to the remote server.
Now when I look into the bin folder I see just three executables:
adrci, genezi, and sqlplus.
I can't find imp.
Hence when I try to import the dump file from CentOS into Oracle, I get the error:
-bash: imp: command not found
I am using the following command to import the dump into the Oracle server:
imp 'rdsuser@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=oracledbrds.cwuabchlhlu.us-east-2.rds.amazonaws.com)(Port=1521))(CONNECT_DATA=(SID=oracledb)))'
Kindly help

The Instant Client does not include many of the tools from the full client, including imp/exp, their newer Data Pump equivalents, SQL*Loader, etc. See the Instant Client FAQ, which highlights that it's largely intended for distribution with your own applications, but can include SQL*Plus - the only tool mentioned.
If you need to use export/import or any other tools then you will need to install the full client, or run them on the server, which might be an issue with AWS. Amazon have an article on importing data into Oracle.
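For what it's worth, once a full client is installed, your original connection string should work with imp; a minimal sketch, where the dump file name and the FULL/LOG options are assumptions on my part:
# hypothetical dump file; FILE, FULL and LOG are standard imp parameters
imp 'rdsuser@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=oracledbrds.cwuabchlhlu.us-east-2.rds.amazonaws.com)(Port=1521))(CONNECT_DATA=(SID=oracledb)))' FILE=mydump.dmp FULL=Y LOG=import.log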
Incidentally, you can put your tnsnames.ora file anywhere as long as you set TNS_ADMIN to point to that location, but you aren't referring to it in your imp command anyway - you're specifying all the connection data inline. If you know the service name, which may be different to the SID (you can run lsnrctl services on the server to find the right value), you can use the 'easy connect' syntax:
sqlplus rdsuser@//oracledbrds.cwuabchlhlu.us-east-2.rds.amazonaws.com:1521/your_service_name
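If you do want to go the tnsnames.ora route, a minimal sketch looks like this; the alias MYRDS and the directory path are hypothetical, while the host, port and SID are the ones from your command:
export TNS_ADMIN=/home/you/tns   # hypothetical directory holding your tnsnames.ora

# /home/you/tns/tnsnames.ora:
MYRDS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracledbrds.cwuabchlhlu.us-east-2.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA = (SID = oracledb))
  )

sqlplus rdsuser@MYRDS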

Related

Is it possible to edit code on my own machine and save it to account I've ssh'd into?

Scenario:
I'm using ssh to connect to a remote machine. From the command line I run ssh <hostname>, which connects me to that machine. I want to edit and run code on the remote machine. So far the only way I know is to create, edit, and run the files in vi inside that terminal window, because my only connection to that machine is that command window.
My Question is:
I'd love to be able to edit my code in VSCode on my own machine, and then use the command line to save that file to the remote machine. Does anyone know if this is possible? I'm using OS X and ssh'ing into a Linux Fedora machine.
Thanks!
Sounds like you're looking for a command like scp. SCP stands for secure copy protocol; it builds on top of SSH to copy files from one machine to another. So to upload your code to your server, all you'd have to do is:
scp path/to/source.file username@host:path/to/destination.file
EDIT: As @Pam Stums mentioned in a comment below the question, rsync is also a valid solution, and is definitely less tedious if you would like to automatically sync client and server directories.
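A rough rsync sketch (paths and host are placeholders): -a preserves permissions and timestamps, -v is verbose, -z compresses in transit, and the trailing slashes sync directory contents rather than the directories themselves:
rsync -avz path/to/project/ username@host:path/to/project/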
You could export the directory on the remote machine using NFS or Samba, mount it as a share on your local machine, and then edit the files locally.
If you're happy using vim, check out netrw (it comes with most vim distributions; :help netrw for details), which lets you use MacVim locally to edit the remote files.
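For example, netrw understands scp:// URLs, so something like the following (host and path are placeholders) opens a remote file for editing from your local vim or MacVim; note the double slash for an absolute remote path:
vim scp://username@host//home/username/project/source.file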

Accessing external hard drive after logging into a remote machine using ssh command

I am doing an intensive computing project with a very old C program. The program requires a library called Sun Performance Library, which is commercial software. Instead of purchasing the library myself, I am running the program by logging onto a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now a problem has occurred: the program uses a large amount of disk space to save intermediate results, and the space on my local Mac is quickly filled (50 GB per user, as prescribed by the administrator). These results are necessary for the next stage of the computation and I cannot delete any of them before it finally produces the output data. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it because the remote machine will give me a prompt saying
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about this), the following command (run from your Mac) will do what you want:
dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will result in the MOUNTPOINT directory on the server being mounted onto your local external drive. Note that I'm not sure whether macOS has dpipe. If it doesn't, you can replace it with one of the equivalent solutions at How to make bidirectional pipe between two programs?. Also, if your SFTP server binary is somewhere else, substitute its path.
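For instance, socat should be able to stand in for dpipe, since connecting two EXEC addresses gives a bidirectional pipe; this is only a sketch under the same hostname and path assumptions as above:
socat EXEC:"/usr/lib/openssh/sftp-server" EXEC:"ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave"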
The common way to mount a remote volume in Solaris is via NFS, but that usually requires root permissions.
Another approach would be to make your application read its data from stdin and output its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh. For instance:
ssh user@host your_application </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data

How to transfer data from a Windows 7 machine to a Windows 2003 server using an Ant script or batch script?

I am using a Windows 7 machine. I would like to know how to transfer data from the local machine to a Windows 2003 server and create a directory on the target machine through an Ant script or batch script.
Most systems have an admin share defined. Your C: drive is located at \\localhost\C$. Replace localhost with the name of your target system.
You should run net use n: \\servername\c$ to establish a connection. If you are not in a domain, you will need to specify a username and password for the connection.
Once you map it, you can treat it like a local drive in your scripts in most situations. Then use whatever tool you are comfortable with to move the files; robocopy is a good one for this.
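A minimal batch sketch, with a hypothetical server name, credentials, and paths (/E tells robocopy to copy subdirectories, including empty ones):
rem map the admin share, passing credentials explicitly (needed when not in a domain)
net use N: \\servername\c$ MyPassword /user:servername\Administrator
rem copy the local data tree to the target machine, creating directories as needed
robocopy C:\data N:\transfer\data /E
rem drop the mapping when done
net use N: /delete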

Getting data from shell script to sql server

I have two servers - one Windows server with SQL Server Express and one Linux server.
On the Linux server I have a shell service that waits for a new folder. After something is added, it checks whether it's OK; after that, it should create a new record, for example a new customer in the customer table.
I already have the first part, but I don't know how to get the data from the shell script into SQL Server.
You could follow the steps below:
Set up a share on the Windows server that is accessible to the Linux server.
Have your Linux script generate a CSV file of the data to be inserted and push it to the Windows server share via SMB.
Write a Windows batch file or PowerShell script, set up as a scheduled task on whatever interval you want, that iterates over each file in the Windows directory dropped by the Linux process and calls BCP to insert the data (see the sketch below).
Move the processed files to an archive directory as part of the Windows batch file.
For documentation on using BCP: http://msdn.microsoft.com/en-us/library/ms162802.aspx
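A rough sketch of steps 2-4; the share name, credentials, table, instance name, and paths are all assumptions. On the Linux side, smbclient is one way to push the CSV to the share:
smbclient //winserver/drop -U svcuser%secret -c "put customer.csv"
And on the Windows side, the scheduled batch file could look like this (-c is character mode, -t sets the field terminator, -S names the local Express instance, -T uses Windows authentication):
rem load each dropped CSV into the customer table, then archive it
for %%F in (C:\drop\*.csv) do (
  bcp MyDb.dbo.customer in "%%F" -c -t"," -S .\SQLEXPRESS -T
  move "%%F" C:\archive\
)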

What is the fastest way to upload big files to the server?

I have a dedicated server and a file of about 4 GB to upload to it. What is the fastest and safest way to upload that file to the server?
FTP may create issues if the connection is broken.
SFTP will have the same issue.
Is your own computer reachable through a public internet IP as well?
In that case you may try to set up a simple HTTP server (if you have Windows, just set up IIS) and then use a download manager on the dedicated server (depending on its OS) to download the file over HTTP (it can use multiple streams for that), or do this through a torrent.
There are trackers, like http://openbittorrent.com/, which will allow you to keep the file on your computer and then use a torrent client to upload the file to the dedicated server.
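A minimal sketch of the HTTP route, assuming a Unix-like machine on your end and a hypothetical public IP and file name:
# on your own machine, serve the directory containing the file
python3 -m http.server 8000
# on the dedicated server, pull the file; -c resumes after a broken connection
wget -c http://YOUR_PUBLIC_IP:8000/bigfile.dat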
I'm not sure what OS your remote server is running, but I would use wget; it has a --continue option. From the man page:
--continue
Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.
wget binaries are available for GNU/Linux, Windows, macOS, and DOS:
http://wget.addictivecode.org/FrequentlyAskedQuestions?action=show&redirect=Faq#download