phpseclib - exec works once, then closes connection - ssh

I am running $ssh->exec() on a router, with a packet handler.
The first call works fine -- I see results.
The second call never gets any results. In the php log I see the message:
Connection closed prematurely in C:\temp\php\Net\SSH2.php on line 3939
Is there something I need to do to keep the connection open long enough for the second call?
Code fragment:
function packet_handler($str) {
    echo "results from remote unit: $str \n";
}
$sCmd="who";
$ssh->setTimeout(3);
$ssh->exec("$sCmd", 'packet_handler'); // I see results for this call
//usleep(3000000);
$ssh->exec("$sCmd", 'packet_handler'); // I never see results for this call

Related

LSP4J : Language Server method call never ends

I have created a Java-based LSP client, but none of the method calls complete; the client waits indefinitely.
Socket socket = new Socket("localhost", 6008);
Launcher<LanguageServer> createClientLauncher = LSPLauncher.createClientLauncher(languageClient,
        socket.getInputStream(), socket.getOutputStream());
LanguageServer server = createClientLauncher.getRemoteProxy();
createClientLauncher.startListening();
InitializeResult result = server.initialize(new InitializeParams()).get();
System.out.println("end");
The initialize method never returns. The Language Server works fine when tested with the VSCode instance.
It seems the requests are not reaching the server, as nothing is printed in the server's trace logs.

A resource failed to call close

I'm getting the android logcat message "A resource failed to call close". I've tracked it down to where that message gets generated. Here's the code:
Properties defaultProperties = new Properties();
URL propURL = Util.class.getClassLoader().getResource(DEFAULT_PROPERTIES_FILE);
if (propURL != null)
{
    InputStream is = null;
    try
    {
        // Load properties from URL.
        is = propURL.openConnection().getInputStream();
        defaultProperties.load(is);
        is.close();
    }
    catch (Exception ex)
    {
The message is generated on the call to "defaultProperties.load(is)".
I put a breakpoint on that line, and when I step over it, the warning message is generated. I'm not the author of the code, but that line gets executed at least two times, and it's the second time it is called that the warning appears. I just don't see how, under any circumstances, a "resource failed to close" warning would be generated on that line. I'm at a loss to explain how or why that error message would be generated there. Any ideas?
After thinking about this, I've come to the conclusion that the line "defaultProperties.load(is)" is not actually causing the warning. Although the message is always generated the second time that line is called, my current thought is that the problem is happening elsewhere: when this line gets called, it probably yields time to some other VM-related thread, and that thread detects that some resource failed to close. So the problem is related to something altogether different, and calling this line is merely when the problem surfaces, not what's causing it.

WebRtc Native-Crashed when I call peerconnection->Close()

How do I close or destroy a PeerConnectionInterface object? It crashes when I try to do so.
I have an object declared like this:
rtc::scoped_refptr<webrtc::PeerConnectionInterface> _peerConnection;
It works fine after I create the PeerConnectionInterface by factory.
However, when the session is over and I call _peerConnection->Close();, the program crashes.
I also tried calling _peerConnection.release()->Release();, which crashes as well.
I added logs in PeerConnection.cc, from the WebRtc source code, and found that it crashes here, in the Close() function and the ~PeerConnection() destructor:
webrtc_session_desc_factory_.reset(); //PeerConnection.cc
The declaration is
std::unique_ptr<WebRtcSessionDescriptionFactory> webrtc_session_desc_factory_;
So I continued logging in WebRtcSessionDescriptionFactory.cc, in the ~WebRtcSessionDescriptionFactory() function. It crashes in FailPendingRequests().
Inside the FailPendingRequests() function:
RTC_DCHECK(signaling_thread_->IsCurrent());
while (!create_session_description_requests_.empty()) {
    const CreateSessionDescriptionRequest& request =
        create_session_description_requests_.front();
    // Crashed here in the third or fourth loop iteration
    PostCreateSessionDescriptionFailed(request.observer,
        ((request.type == CreateSessionDescriptionRequest::kOffer) ?
            "CreateOffer" : "CreateAnswer") + reason);
    create_session_description_requests_.pop();
}
I will be really grateful for any suggestion!
I faced the same issue in iOS when implementing the Kurento library. The key to fixing it is to dispose of the resources in the right order.
Steps I followed:
The order of creation:
Created WebRTCPeer object
Created RoomClient object
Once RoomClient connected, generated SDP Offer.
and so on.
The order of disposal:
Disconnected the RoomClient first.
Kept an eye on "RTCIceConnectionState" and "RTCIceGatheringState" in the WebRTC events.
Once "RTCIceConnectionState" was closed and the gathering state was "RTCIceGatheringStateComplete", disposed of the WebRTCPeer object.
This resolved the problem; otherwise the main object was disposed while its resources were still initialised, which results in crashes.
Hope that helps!

Why does my code run 2 different ways with the exact same code?

I have an application I've started developing that monitors websocket messages from all clients connected to the websocket server; the server relays every message it receives to this application.
Problem
When I run my program (in Visual Studio I hit Start), it builds and starts up perfectly, and most of the functionality behaves the same every time. However, one portion of code commonly does not run the same way. Below is a small snippet of that code.
msg = "set name monitor"
SendMessage2(socket, msg, msg.Length)
msg = "set monitor 1"
SendMessage2(socket, msg, msg.Length)
Console.WriteLine("We are after our second SendMessage2 function")
I know that the two calls to SendMessage2 are always executed because visual studio's debug console will output the following
We are at the end of the SendMessage2 Sub
We are at the end of the SendMessage2 Sub
We are after our second SendMessage2 function
I also know whether it executed correctly, because my websocket server will output one of the following two blocks.
Output when app runs correctly
Client 4 connected
New thread created
Connection received. Parsing headers.
Message from socket #4: "set name monitor"
Message from socket #4: "set monitor 1"
Output when app runs incorrectly
Client 4 connected
New thread created
Connection received. Parsing headers.
Message from socket #4: "set name monitor"
Notice how the second output is missing the second message from the monitor application.
What have I tried
Using a string variable to call the functions
Calling the functions using static string arguments (not using the variable msg)
SyncLocking the functions separately
SyncLocking inside the SendMessage2 function
Reordering the functions (swapping the strings to change behavior)
TL;DR
Why is it that even when I do not change my code, my program will execute two separate ways? Am I doing something incorrectly when calling my SendMessage2 Sub?
I am all out of ideas. I am willing to try any recommendation to fix this problem.
All code can be found on GitHub here
So I figured it out.
It is actually not the VB application that is messing up, nor my server. While debugging, I was looking at the number of bytes received by my server and I noticed the following:
Client 4 connected
New thread created
Connection received. Parsing headers.
bytes read: 25
Message from socket #4: "set name monitor"
bytes read: 22
Message from socket #4: "set monitor 1"
Ok, great: we have 25 bytes from "set name monitor" and 22 bytes from "set monitor 1".
Client 4 connected
New thread created
Connection received. Parsing headers.
bytes read: 47
Message from socket #4: "set name monitor"
And boom. Both programs were doing their jobs, sending and reading the correct number of bytes every time. However, the VB application sends the messages so quickly back to back that my server was reading all 47 bytes in a single read instead of the separate 25 and 22 bytes.
Solution
I solved this problem by implementing a secondary buffer in my server to store all bytes after the first message, should multiple messages arrive grouped like this. Now I check whether my secondaryBuffer is empty before reading in new bytes.
Here is a portion of the code used to solve the problem
/* Byte check: look for the start of a second websocket frame. */
for (j = 0; j < bytes; j++) {
    if (j < 2)
        continue; /* need two preceding bytes before inspecting readBuffer[j-2] */
    if (readBuffer[j] == '\x81' && readBuffer[j-1] == '\x00' && readBuffer[j-2] == '\x00') {
        secondaryBytes = bytes - j;
        printf("Potential second message attached to this message\nCopying it to the secondary buffer.\n");
        memcpy(secondaryBuffer, readBuffer + j, secondaryBytes);
        break;
    } /* END IF */
} /* END FOR LOOP */
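The coalescing described above is inherent to TCP: it delivers a byte stream, not discrete messages, so back-to-back sends can arrive in one read and the receiver must split them itself. A minimal sketch of the idea in Go (not the server's actual C/WebSocket code; frame and splitFrames are illustrative names), using a 4-byte length prefix instead of scanning for frame bytes:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frame prepends a 4-byte big-endian length to a message.
func frame(msg []byte) []byte {
	buf := make([]byte, 4+len(msg))
	binary.BigEndian.PutUint32(buf, uint32(len(msg)))
	copy(buf[4:], msg)
	return buf
}

// splitFrames walks a buffer that may hold several coalesced
// frames and returns each complete message separately.
func splitFrames(buf []byte) [][]byte {
	var msgs [][]byte
	for len(buf) >= 4 {
		n := int(binary.BigEndian.Uint32(buf))
		if len(buf) < 4+n {
			break // incomplete frame: keep the tail for the next read
		}
		msgs = append(msgs, buf[4:4+n])
		buf = buf[4+n:]
	}
	return msgs
}

func main() {
	// Two messages sent back to back may arrive in one TCP read.
	coalesced := append(frame([]byte("set name monitor")), frame([]byte("set monitor 1"))...)
	for _, m := range splitFrames(coalesced) {
		fmt.Printf("message: %q\n", m)
	}
}
```

WebSocket already provides this framing at the protocol level, which is why parsing whole frames (rather than treating each recv buffer as one message) also fixes the issue.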

Go hangs when running tests

I'm writing a web application in Go and it runs just fine. However, when running the tests for a package, the go test command just hangs (it does nothing, not even terminate).
I have a function for testing which starts the server:
func mkroutes(t *testing.T, f func()) {
	handlerRegistry = handlerList([]handler{})
	middlewareRegistry = []middleware{}
	if testListener == nil {
		_testListener, err := net.Listen("tcp", ":8081")
		testListener = _testListener
		if err != nil {
			fmt.Printf("[Fail] could not start tcp server:\n%s\n", err)
		}
	}
	f()
	go func() {
		if err := serve(testListener, nil); err != nil {
			fmt.Printf("[Fail] the server failed to start:\n%s\n", err)
			t.FailNow()
		}
	}()
}
If I change the port it listens on, everything runs (all the tests fail, though, since they can't connect to the server). This shows that the code is indeed running; but if I log something in the function, or even in the init function, while the port is correct, it breaks again.
After I force the go test command to terminate manually, it does print whatever I logged, then exits. This leads me to believe that something else is blocking the main thread before execution reaches the log, but that seems impossible since changing the port makes a difference.
The package doesn't have any init functions and the only code that runs on startup is var sessionStore = sessions.NewCookieStore([]byte("test-key")) which is using the package github.com/gorilla/sessions. When I run the program normally, this causes no problems, and I don't see anything in the package's source that would cause it to behave differently in testing.
That's the only package outside the standard library which is imported.
I can provide any other code in the package, but I have no idea what's relevant.
First: note that go test creates, compiles, and runs a test binary which intercepts output from each test, so you will not see output from your tests until the test finishes (at that moment the output is printed).
Two issues with your code:
If the TCP server cannot be started, t is never notified; you only call Printf here and continue as if everything were fine.
You call t.FailNow from a goroutine other than the one your test runs in. You must not do this. See http://golang.org/pkg/testing/#T.FailNow
Fixing those might at least show what else goes wrong. Also: take a look at how package net/http does its testing.
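One common way to address both points is to have the server goroutine report failures back over a channel, so the test's own goroutine can call t.Fatal. A minimal sketch, not the original code (startServer and the error channel are assumptions for illustration):

```go
package main

import (
	"fmt"
	"net"
)

// startServer launches an accept loop in a goroutine and reports
// errors over a channel instead of calling t.FailNow from the
// wrong goroutine.
func startServer(addr string) (net.Listener, <-chan error) {
	errc := make(chan error, 1)
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		errc <- err // startup failure is reported, not just printed
		return nil, errc
	}
	go func() {
		for {
			conn, err := ln.Accept()
			if err != nil {
				errc <- err // listener closed or failed
				return
			}
			conn.Close() // a real server would handle the connection here
		}
	}()
	return ln, errc
}

func main() {
	ln, errc := startServer("127.0.0.1:0") // port 0: let the OS pick a free port
	if ln == nil {
		fmt.Println("listen failed:", <-errc)
		return
	}
	fmt.Println("listening on", ln.Addr())
	ln.Close()
	fmt.Println("accept loop exited:", <-errc)
}
```

In a test, the main goroutine would then drain errc (for example with a select and a default case, or after the test body) and call t.Fatal(err) there, which is the goroutine t's methods expect.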