tensorflow serving: failed to connect to 'ipv4:127.0.0.1:9000'

I have installed and configured tensorflow serving on an "AWS t2.large Ubuntu 14.04" server.
When I attempt to test the server with the mnist_client utility by executing the command bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000, I receive the following error message:
E0605 05:03:54.617558520 1232 tcp_client_posix.c:191] failed to connect to 'ipv4:127.0.0.1:9000': timeout occurred
Any idea how to fix this?

I haven't heard of anything like this, but did note that (at least with other test clients) when the server was not up or ready yet, requests would time out. So my guess is that the server is not up yet, or perhaps it is in some other bad state.
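If you suspect that, a generic TCP check (nothing TensorFlow-specific, just a sketch) can tell you whether anything is listening on port 9000 before you run the client:

import socket

# Generic diagnostic: try to open a TCP connection to the serving port.
# If this fails, the model server is not up (or not listening on 9000) and
# any client request will time out exactly as in the error above.
def port_is_open(host="127.0.0.1", port=9000, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_is_open())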

I met the same problem before. The root cause was that mnist_client was run on my local machine instead of on the server, while the command connects to localhost: bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000
It succeeds when I run the mnist_client utility on the server.
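Note that if you do want to run the client from another machine, you would presumably have to point it at the server's address instead of localhost, e.g. bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=<server-address>:9000 (where <server-address> is a placeholder for your EC2 host), and make sure the instance's security group allows inbound traffic on port 9000.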

Related

Why does Azure CosmosDB emulator intermittently fail to respond?

I am developing a web server to be deployed on Azure, and in the process I am using the CosmosDB emulator, locally installed on my Windows 10 machine. The problem is that the database seems to refuse connections intermittently - let me stress this, it sometimes responds and most of the time it does not. Why is this so and what can I do to get around the problem?
Docker is not used, for either the server or the client.
OS is Windows 10.
I am using CosmosDB's MongoDB API.
I am using pymongo to connect synchronously (in scripts), and motor to connect asynchronously (for the web server).
Both with pymongo and motor I experience the same problem, namely the error output
pymongo.errors.ServerSelectionTimeoutError: localhost:10255: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1129), Timeout: 100.0s, Topology Description: <TopologyDescription id: 63aaec350ac3d3aa70c8bcf7, topology_type: Unknown, servers: [<ServerDescription ('localhost', 10255) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:10255: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1129)')>]>
which occurs intermittently, i.e. not all the time, but most of the time.
As far as I can see, the pattern is that the failure occurs for a stretch of time, and then for a stretch of time it works without error.
I am thinking that this cannot be a certificate error, because the connection sometimes does work (and because everything is on the same machine). AFAIU it looks like a timeout error, but I have tried timeouts up to 200 seconds.
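One way to separate the two hypotheses would be a minimal connection test that skips certificate verification entirely (a diagnostic sketch only; tlsAllowInvalidCertificates is a standard pymongo option, and the credentials below are placeholders for the emulator's account name and key):

import pymongo

# Diagnostic sketch: if this connects reliably while the normal connection
# string does not, the self-signed emulator certificate (or how it is trusted)
# is the likely culprit rather than a plain timeout.
# Do not keep tlsAllowInvalidCertificates=True in real code.
client = pymongo.MongoClient(
    "mongodb://localhost:10255/",          # emulator endpoint from the error above
    username="<emulator account name>",    # placeholder
    password="<emulator account key>",     # placeholder
    tls=True,
    tlsAllowInvalidCertificates=True,
    serverSelectionTimeoutMS=10000,
)
print(client.list_database_names())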
Code:
An example snippet of a data-import script
import json
import os

import pymongo

import constants  # project-local module holding the DB and collection names

# Load the items to import
with open(data_filename, 'r') as infile:
    new_items = json.load(infile)

CONNECTION_STRING = os.environ.get('MONGODB_URL')
client = pymongo.MongoClient(CONNECTION_STRING)
db = client[constants.DB_NAME]
collection = db[constants.COLLECTION_CONTACTS]

with pymongo.timeout(100):
    for item in new_items:
        existing_items = collection.find(filter={'name': item['name']})
        if existing_items:
            print(f'Item exists for {existing_items[0]["name"]} with id: {[i["_id"] for i in existing_items]}')
            # continue
Other actions, such as retrieving data with motor, or listing databases with client.list_database_names(), will fail similarly.
Client issues such as these are nearly impossible to troubleshoot on a forum because of all the variables within users' PCs. The documentation for the emulator recommends that for connectivity issues, users collect trace files and open a support ticket.
You can learn more at Troubleshoot issues when using the Azure Cosmos DB Emulator

Datastax cassandra cpp_driver hangs when connecting to node

I set up ScyllaDB on my Debian 9.6 machine. When I run cqlsh I can connect to it and create tables, do queries, etc.
Now I have tried to write a simple program in C++ using the DataStax driver, and it can't connect. It always blocks when it tries to connect.
The scylla package I installed is:
scylla | 3.0.11-0.20191126.3c91bad0d-1~stretch
cpp_driver is the current master from github: https://github.com/datastax/cpp-driver
Now I have tried to run the examples/simple project which is included in the driver, so I assume that it should work, but it shows the same problem. I don't get any errors; it just blocks:
CassCluster* cluster = cass_cluster_new();
CassSession* session = cass_session_new();
const char* hosts = "127.0.0.1";
cass_cluster_set_contact_points(cluster, hosts);
cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
CassFuture* connect_future = cass_session_connect(session, cluster);
// cass_future_error_code() waits for the future to resolve; here it blocks now forever...
CassError er = cass_future_error_code(connect_future);
I also tried to run it on Ubuntu 16.04, but it shows the same problem. Since the connection works when using cqlsh, I don't think it is a configuration problem, but rather something with the cpp_driver.
I also traced the TCP connection, and I can see that the cpp_driver talks to the server, and the exchange looks similar to the cqlsh conversation.
I finally found the solution for this issue. We were using cpp_driver 2.15.1, which apparently got some change in the event handling according to their release notes. When I downgraded to 2.15.0 the problem was gone and the connection could be successfully established.

Connect via ssh to CF - Error

I need to debug my application; we are using version 2.65 (Diego).
I am using the following guide:
http://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html
While running cf ssh myapp via the CLI, nothing happens. What should I do in order:
1. to see the container FS?
2. to be able to debug it?
The application was deployed successfully to CF.
I'm using a Node.js app.
All other commands are working well.
When I run the command cf ssh myapp, I get this error after two minutes:
FAILED
Error opening SSH connection: dial tcp 52.23.201.1:2277: getsockopt: operation timed out
It sounds like a platform issue with the non-standard SSH port.
You can find more manual SSH-access steps and troubleshooting tips at https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html
If you believe it is an instance issue, you can download a copy of the droplet/filesystem using the API; more at https://apidocs.cloudfoundry.org/213/apps/downloads_the_staged_droplet_for_an_app.html
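For illustration, downloading the droplet through that endpoint could look roughly like the sketch below; the API host, app GUID and token are placeholders you would take from your own foundation (e.g. via cf curl and cf oauth-token), and the endpoint path is the one documented on the linked apidocs page:

import requests  # third-party HTTP client, used here only for illustration

API = "https://api.example.cf.com"            # placeholder: your CC API endpoint
APP_GUID = "<app-guid>"                       # placeholder: from `cf app myapp --guid`
TOKEN = "<bearer token from cf oauth-token>"  # placeholder

# GET /v2/apps/:guid/droplet/download, as described on the apidocs page above
resp = requests.get(
    f"{API}/v2/apps/{APP_GUID}/droplet/download",
    headers={"Authorization": TOKEN},
    allow_redirects=True,
    stream=True,
)
resp.raise_for_status()
with open("droplet.tgz", "wb") as out:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        out.write(chunk)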

heroku "initialize" operation time out error

I'm using Rails 3.1.1 and I've been trying to run commands such as console and rake, but I am faced with an error like:
Running console attached to terminal...
/Users/ender/.rvm/gems/ruby-1.9.2-p290/gems/heroku-2.17.0/lib/heroku/client/rendezvous.rb:33:in `initialize':
Operation timed out - connect(2) (Errno::ETIMEDOUT)
What's the problem? Does anyone know?
There are a few troubleshooting points at http://devcenter.heroku.com/articles/oneoff-admin-ps#troubleshooting - typically it's a problem connecting on port 5000 if you're behind a firewall.

Apache upload failed when file size is over 100k

Below is some information about my problem.
Our Apache 2.2 runs on Windows Server 2008.
Basically, the problem is that users fail to upload files bigger than 100k to our server.
The error in Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
There were a few times (not always) when I could upload larger files (100k-800k, though it failed for 20m) in Chrome. In FF4 it always fails to upload files over 100k. IE8 behaves similarly to FF4.
It seems that the server fails to get the request from the client, so I reset TimeOut in the Apache settings to the default value (300), which did not help at all.
I do not have the LimitRequestBody option set and I am not using PHP. Has anyone seen a similar error before? I am not sure what I can try next. Any advice would be appreciated!
Edit:
I just tried using remote desktop to upload files on the server itself and it worked fine. My first thought was the firewall, which however is off all the time; an HTTP proxy is applied though.