Mercurial hg no suitable response from remote hg error - ssh

Trying to set up Mercurial SCM on my Windows server (2008 RC) for the last couple of hours. I am stuck on this error when I try to clone my repo from the client machine.
Error: no suitable response from remote hg
The server that I am running has SSH access (SSH running on port 1667). I also have remote access to it.
I tried to clone both from the command line and with the TortoiseHg GUI client. The commands I tried are:
hg clone ssh://myuser@myremoteip:1667//D:/Mercurial Projects/testproj E:\Mercurial\testproj-clone
hg clone --remotecmd D:/Program Files/TortoiseHg/hg --verbose -- ssh://myuser@myremoteip:1667//D:/Mercurial Projects/testproj E:\Mercurial\testproj-clone
but no success so far.
I also added the following lines to the global settings on the client side to give the remote path of hg on the server, but no luck:
[ui]
remotecmd = D:/Program Files/TortoiseHg/hg
Please help me...

I had a similar problem, and in my case it was that the computer had both TortoiseSVN and TortoiseHg installed. Both ship a TortoisePlink.exe that they use; however, because of the PATH order, TortoiseHg was using TortoiseSVN's TortoisePlink.exe.
Uninstalling TortoiseSVN solved the problem for me.
You may open a "cmd" window and type:
where TortoisePlink.exe
to check which TortoisePlink.exe is being used.

I think the problem was that my Python version was older than the one I needed. I was trying to set it up with Python 2.6; I then followed another tutorial with Python 2.7 and the latest Mercurial version (2.8.1).
Anyone with Windows Server 2008 and IIS 7+ should follow this tutorial.

I ran into this problem after updating TortoiseHg. It turned out the location of TortoisePlink.exe had changed. I had it set explicitly to C:\Program Files\TortoiseHg\TortoisePlink.exe in mercurial.ini and I had to change it to C:\Program Files\TortoiseHg\lib\TortoisePlink.exe.
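For example, the relevant mercurial.ini section would then look something like this (a sketch; the -P 1667 part is only needed if, as in the question, SSH listens on a non-standard port):
[ui]
ssh = "C:\Program Files\TortoiseHg\lib\TortoisePlink.exe" -P 1667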


Can't connect VS Code to Linux machine for remote development

I am getting this error on VS Code and have no clue why it fails
[15:14:59.543] Log Level: 2
[15:14:59.555] remote-ssh#0.51.0
[15:14:59.555] win32 x64
[15:14:59.560] SSH Resolver called for "ssh-remote+xx.xx.xx.xx", attempt 1
[15:14:59.561] SSH Resolver called for host: xx.xx.xx.xx
[15:14:59.561] Setting up SSH remote "xx.xx.xx.xx"
[15:14:59.621] Using commit id "0ba0ca52957102ca3527cf479571617f0de6ed50" and quality "stable" for server
[15:14:59.624] Install and start server if needed
[15:15:01.964] getPlatformForHost was canceled
[15:15:01.965] Resolver error: Connecting was canceled
[15:15:01.973] ------
Add one key to your settings.json as below. Please remember to replace $remote_server_name with yours.
"remote.SSH.remotePlatform": {
    "$remote_server_name": "linux"
}
Menu: File -> Preferences -> Settings
Or click the icon to open settings.json directly.
In the dialog box where you typed user@host, type/select Linux/Windows/etc. depending on what you are using, then select Continue, then type the password for the remote session.
For those getting this error on Windows: Check if you have multiple ssh clients installed.
How I solved it was by adding my ssh-configuration to ALL ssh-config files.
In my case I had one in
C:\Users\USER_NAME\.ssh\config (this is the one that the Remote extension used to give me connection options)
and another in C:\ProgramData\ssh\ssh_config.
After adding my ssh config entry to both, I got the prompt to select the virtual host's OS. I tried editing the settings.json file directly, but I think the extension gets confused because of the multiple ssh configurations.
P.S.
Tested it for both private-key and password-enabled connections and it works with either.
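For reference, a minimal sketch of the kind of Host entry I mean (all names here are placeholders):
Host myserver
    HostName xx.xx.xx.xx
    User myuser
    IdentityFile C:\Users\USER_NAME\.ssh\id_rsa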
I had a similar problem, but the error logs were bigger. Before that, I had deleted Python and reinstalled it; perhaps that led to the problem. Reinstalling the "Remote - SSH" extension in VS Code worked for me.
In my case there were two files that looked like
vscode-remote-lock.<user>.<xxx>
vscode-remote-lock.<user>.<xxx>.target
where <user> was my remote user name and <xxx> the VS Code Remote Server build hash.
These two files were on the remote server in the folder
/run/user/1000/
I deleted both files and then VS Code came up right away. I have encountered this a few times now. VS Code Remote Server install is not very robust. I use it on about 7 remote machines and every once in a while something goes awry and it cannot recover from simple errors and gets stuck in installation loops.
This trick only works if there is a valid ~/.vscode-server on the remote machine with a hash that matches your local VS Code installation.
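A sketch of that cleanup, assuming the same /run/user/1000 location (the user id and hash will differ on your machine):
# list the stale VS Code remote lock files
ls /run/user/1000/vscode-remote-lock.*
# remove them so the server can start cleanly
rm -f /run/user/1000/vscode-remote-lock.*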
If you got here because you were trying to install VS Code in the first place and for whatever reason VS Code had issues with the remote installation, I highly recommend installing it manually by downloading and extracting the tar file to the remote machine directly.
I have tried playing with the "Remote.SSH: Use Flock" setting and other tricks posted on Stack Overflow, but none of them work for me whenever I have remote installation issues. I cannot figure out why a smooth remote installation is not possible on some machines, even when all of my ssh keys and remote ids have been copied and tested from both the Windows command line and inside a WSL Ubuntu instance.
If VS Code Remote Server installation had slightly better error logic and better error messages none of us would be wasting hours doing this simple task.
I was getting the exact same error as the original poster, and yet none of the other answers addressed my issue.

Cannot install Kaltura oflaDemo on CentOS7

I'm currently setting up a Kaltura streaming server on CentOS 7 with MariaDB. When I get to the point where the installation manual requires me to install oflaDemo via the browser, I only get an empty list. No connection errors occur. The debug output states:
Host: vstream-dev.my.domain
Trying to connect
Net status: NetConnection.Connect.Success
Got the application list
Got the application list
Got the application list
So, in theory there shouldn't be a problem.
Firewall is down for testing/devel
SELinux is off (permissive)
The only error that occurred during the installation process was that the package mysql-server is not installed. But the manual states that I should use MariaDB on CentOS 7.
I tried to clone https://github.com/Red5/red5-examples and link the oflaDemo folder to /usr/lib/red5/webapps/, with no success.
Ok, I solved it.
What I did:
I cloned the repo with the Red5 examples (https://github.com/Red5/red5-examples) and navigated into the subfolder oflaDemo (the one with pom.xml).
Then I had to install Maven with
yum install -y maven
and do a maven build
mvn clean install
After that, I was able to grab the file target/red5-example-oflaDemo-2.0.war. I extracted this file into a folder oflaDemo in /var/lib/red5/webapps and restarted the server. Finally, I did mkdir /usr/lib/red5/webapps/oflaDemo/streams to create a folder for the streams.
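Roughly, the steps after the Maven build looked like the sketch below; the red5 service name and the exact webapps path (/var/lib vs. /usr/lib above) depend on your install, so treat them as assumptions:
# a .war is just a zip archive, so extract it into the webapps folder
unzip target/red5-example-oflaDemo-2.0.war -d /var/lib/red5/webapps/oflaDemo
# folder for the demo's streams
mkdir /usr/lib/red5/webapps/oflaDemo/streams
# restart the server (service name assumed)
systemctl restart red5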
After that, I was able to navigate to the demo via
http://my.domain:5080/oflaDemo/

Why can't "R CMD INSTALL" find the ODBC driver manager when installing the R package "RODBC"?

I am trying to connect to a Vertica DB from R using the "RODBC" package. The machine I am using is a remote server without direct internet access, so I basically "transfer" all source files from my local machine to the remote server to build the system. To give you a clear context, I am listing all my steps in attempting to install the "RODBC" package below:
Step 1 - I downloaded the RODBC_1.3-13.tar.gz source file for RODBC and then tried to install it directly with "R CMD INSTALL". However, I encountered the error "ODBC headers sql.h and sqlext.h not found".
Step 2 - After some research, I found that installing "unixodbc-dev" would potentially solve this issue. Therefore, I downloaded all needed dependencies for "unixodbc-dev" and transferred them to the server as well.
With that, I also successfully installed "unixodbc-dev".
However, another error appeared when I tried to re-install "RODBC" using "sudo R CMD INSTALL /home/mli/RODBC_1.3-13.tar.gz"; this time it returned "no ODBC driver manager found".
As the message indicates, the installation program can't locate my ODBC driver manager. So, I downloaded "vertica-client-7.2.3-0.x86_64.tar.gz" and unzipped it on the server.
So, now my question is: how can I customize the "R CMD INSTALL" command, say with some parameters, to point the installation program at the driver manager? Or am I even heading in the right direction? Please let me know. Any help would be really appreciated!!! :)
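(For reference, R CMD INSTALL does accept a --configure-args switch that passes options through to the package's configure script. A sketch is below; the two option names are what RODBC's configure is believed to accept for a non-standard driver-manager location, so double-check them against the INSTALL file shipped in the tarball, and replace the paths with wherever your unixODBC headers and libraries actually live.)
sudo R CMD INSTALL RODBC_1.3-13.tar.gz --configure-args='--with-odbc-include=/path/to/odbc/include --with-odbc-lib=/path/to/odbc/lib'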
ADDITION:
I have also tried JDBC: I successfully loaded the "RJDBC" package in R and used the JDBC driver from vertica-client-7.2.3-0.x86_64.tar.gz. I also already had "rJava" installed. However, I still got an error when I tried to make the connection. I am listing my result below:
I successfully installed "RJDBC" with "$ R CMD INSTALL RJDBC_0.2-5.tar.gz --library=/usr/local/lib/R/site-library/" and then tried the following script in R. All the lines execute successfully except line 16:
Based on the error message, I assumed the version of the JDBC driver that I was using is too new for the Vertica server. So, I tried to use an older JDBC driver instead, like "vertica-jdk5-6.1.0-0.jar", which I downloaded from this link: http://www.java2s.com/Code/Jar/v/Downloadverticajdk56100jar.htm
So, I moved the file "vertica-jdk5-6.1.0-0.jar" to my home directory on the server and then changed the JDBC driver path in the R script:
As you can see, it still returns the error "FATAL: Unsupported frontend protocol 3.6: server supports 3.0 to 3.5". Am I doing it right? Or is there an issue with the new driver that I downloaded? How can I make it work? Please, any help will be really appreciated! Thanks!!!
A few things:
First, just do sudo apt-get install r-cran-rodbc. The package was created (by yours truly) in no small part because dealing with unixODBC or iODBC is not fun. But even once you have that, you still need the ODBC driver for Linux from Vertica. And that part is fiddly.
Second, I just did something similar the other day but used JDBC, which worked. You do of course need sudo apt-get install r-cran-rjava, which opens its own can of worms (but I already mentioned Java...). Still, maybe try that instead?
Third, you can cheat and just use psql pointed to the Vertica port (usually one above the PostgreSQL port).
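For the third option, a minimal sketch (host, user, and database names are placeholders; 5433 is Vertica's usual client port, one above PostgreSQL's 5432):
psql -h vertica-host.example.com -p 5433 -U dbadmin mydb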

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

So I've spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server, and I have found lots of tutorials on how to do this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from my desktop PC, but that does not help my CrashPlan Pro app connect to the CrashPlan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file in
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your Crashplan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart crashplan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to remotely connect to crashplan.
The only change on the server necessary for the easiest method without ssh tunnel is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml.
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect, because the token will change after every crashplan restart, or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start crashplan on your remote machine and it will connect properly; there are no other changes necessary. The latest crashplan (>4.4.1) will actually use the IP address from .ui_info.
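A sketch of that copy step from an OS X or Linux desktop; the destination path below is an assumption, so check where your CrashPlan desktop client looks for .ui_info:
# pull the token file from the FreeNAS box (host name is a placeholder)
scp root@freenas:/var/lib/crashplan/.ui_info "/Library/Application Support/CrashPlan/.ui_info"
Afterwards, edit the last field of the copied file by hand so that it holds the server's address, as in the 192.168.1.20 example above.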
Install JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file

The local psql command could not be located

I'm following the instructions found here.
When I try to run $ heroku pg:psql or $ heroku pg:psql HEROKU_POSTGRESQL_BROWN I receive the following error message:
! The local psql command could not be located
! For help installing psql, see local-postgresql
I can't find anything useful on the link it gives me (it just links to the instructions I was already using, but further down the page) nor can I find this error anywhere else.
If I've missed anything you need to know to answer this, just let me know. I'm rather new to all this and teaching myself as I go.
I had the same error even after installing Postgres locally.
But after seeing this
I saw that "psql" was not in the PATH, so I then did
PATH=%PATH%;C:\Program Files\PostgreSQL\9.2\bin
which worked for me
I have since solved this myself. When I ran heroku pg:info it said the version number was 9.1.8, while I was running 9.2 locally.
Installing 9.1.8 and ensuring PATH pointed to the appropriate folder solved the problem.
After you change the path, make sure to restart the terminal!
Set the PATH. To find out the PATH of your psql script (on Mac), open the SQL shell script from your Finder in the Applications/PostgreSQL installation folder. This will give you a hint as to where it is installed. That opened a window which told me it is located here: /Library/PostgreSQL/8.4/scripts/runpsql.sh
Then, I set the PATH variable from the terminal window by typing:
$ PATH="/Library/PostgreSQL/8.4/bin:$PATH"
(depends on the location of your PostgreSQL installation; find your bin path first, another example: /usr/local/Cellar/postgresql@9.6/9.6.8/bin)
OR.....
You can also connect to the shell by opening the shell directly from your postgres installation folder. Then enter the credentials. If you don't know the credentials, here is how to find them out:
$ heroku pg:info
=== HEROKU_POSTGRESQL_RED_URL (DATABASE_URL)
$ heroku pg:credentials HEROKU_POSTGRESQL_RED_URL
Oddly, the top answer wouldn't work for me; my system would not add the path via cmd with administrator access (not sure why).
So check this: Windows key > environment variables > system variables
and add your PostgreSQL bin directory as the last line (your version may differ in the path).
Make sure you've installed the toolbelt as psql is installed by default.
However, you also need to ensure you've installed a local copy of PostgreSQL; if you don't, the toolbelt will be unable to find the native psql client.
Assuming you have installed a local copy of PostgreSQL, make sure you can execute psql from the command line directly (i.e. make sure your PATH is set correctly). If the command does not execute, check your PATH; if it does execute, see if you can connect via the psql connection string provided in the Heroku control panel. If you can connect, reinstall the toolbelt; if you are unable to connect, provision another dev database and try again.
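To test the connection-string route, you can paste the credentials from the Heroku control panel straight into psql; everything in the URL below is a placeholder:
psql "postgres://username:password@ec2-00-000-000-000.compute-1.amazonaws.com:5432/dbname"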
If there are still issues, I would suggest contacting Heroku support for assistance after verifying no API issues are listed on the status page located here.
I got rid of this annoying message on Windows by adding a path element without the spaces, i.e.
C:\Progra~1\PostgreSQL\9.4\data
instead of
“C:\Program Files\PostgreSQL\9.4\data”
I followed the instructions here: http://www.computerhope.com/issues/ch000549.htm, which worked for me, if you prefer the point-and-click configuration of the PATH variable.
This type of error usually appears in the Windows environment: if you do not update the PATH after installing PostgreSQL, the heroku pg:psql command does not work.
So you need to update your PATH environment variable to add the bin directory of your Postgres installation. The directory will look like this:
C:\Program Files\PostgreSQL\<VERSION>\bin.
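For example, a one-off way to do this for the current cmd session (the version number is only an example, and the change is not permanent):
set PATH=%PATH%;C:\Program Files\PostgreSQL\12\bin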
For more information, see the Heroku article on local setup:
heroku-postgresql: Local setup
I had the same problem and discovered that Heroku doesn't seem to provision the latest version of PostgreSQL by default. Where the Heroku Getting Started instructions said
heroku addons:create heroku-postgresql:hobby-dev
That provisioned a v10 database for some reason (which you can check by clicking on Heroku Postgres in the Add-ons tab of your dashboard). I deleted that database and provisioned a new database using the --version flag:
heroku addons:create heroku-postgresql:hobby-dev --version 11
As of now, at least, you can find the latest version of Postgres supported by Heroku at this link: https://devcenter.heroku.com/articles/heroku-postgresql#version-support-and-legacy-infrastructure
I'm writing this in early 2019, but according to the PostgreSQL website the next version (12) is "tentatively scheduled" for the third quarter of 2019, so if you're reading this in late 2019 the same problem may well come up for v12 instead.
On Mac you can use the following:
export PATH="/Library/PostgreSQL/12/bin/:$PATH"
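To make that survive new terminal windows, you can append it to your shell profile (assuming zsh; use ~/.bash_profile for bash):
echo 'export PATH="/Library/PostgreSQL/12/bin/:$PATH"' >> ~/.zshrc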
The only solution that I found on Windows:
go to advanced system settings
go to environment variables
select the Path variable and click Edit
add a new line and enter your bin directory path (C:\Program Files\PostgreSQL\<version>\bin) and click OK
restart your terminal
enter your psql command (heroku pg:psql)