I try to use ox.graph_from_place to obtain the road network data, but the connection failed - osmnx

I try to use ox.graph_from_place to obtain the road network data, but the connection fails.
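A minimal sketch of one thing to try, assuming the failure is a timeout against the Overpass API (the setting name varies by osmnx version, and the place name is just an example):

import osmnx as ox

# Raise the Overpass request timeout so slow responses get more time.
# Recent osmnx releases use ox.settings.requests_timeout; 1.x uses
# ox.settings.timeout; very old versions use ox.config(timeout=...).
ox.settings.timeout = 600  # seconds

# Retry the download for the place of interest
G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")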

Related

Spark - Failed to load collect frame - "RetryingBlockFetcher - Exception while beginning fetch"

We have a Scala Spark application that reads roughly 70K records from the DB into a data frame; each record has two fields.
After reading the data from the DB, we apply a minor mapping and load the result as a broadcast variable for later use.
In the local environment, there is a timeout exception from the RetryingBlockFetcher while running the following code:
// Collect both columns to the driver as an immutable Map[String, Long]
dataframe.select("id", "mapping_id")
  .rdd
  .map(row => row.getString(0) -> row.getLong(1))
  .collectAsMap()
  .toMap
The exception is:
2022-06-06 10:08:13.077 task-result-getter-2 ERROR org.apache.spark.network.shuffle.RetryingBlockFetcher Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to /1.1.1.1:62788
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$2.createAndStart(NettyBlockTransferService.scala:122)
In the local environment, I simply create the Spark session with a local "spark.master".
When I limit the number of records to 20K, it works well.
Can you please help? Maybe I need to configure something in my local environment so that the original code works properly?
Update:
I tried changing many Spark-related configurations in my local environment: memory, number of executors, timeout-related settings, and more, but nothing helped; I just got the same timeout after a longer wait.
I realized that the data frame I'm reading from the DB has a single partition of 62K records; after repartitioning into 2 or more partitions, the process worked correctly and I managed to map and collect as needed (see the sketch below).
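A minimal sketch of that workaround (the partition count of 2 is just the value that worked here):

dataframe.select("id", "mapping_id")
  .repartition(2) // split the single 62K-record partition so each fetched block stays small
  .rdd
  .map(row => row.getString(0) -> row.getLong(1))
  .collectAsMap()
  .toMap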
Any idea why this solves the issue? Is there a Spark configuration that can solve this instead of repartitioning?
Thanks!

Modem config match

I have two similar modems. When I insert the SIM in the first modem, it connects to the network automatically, but when I insert the same SIM in the second modem, it doesn't connect.
I ran the AT&V command to read the profile of each modem. I compared the settings, and they are all the same except for the following:
+CGDCONT: (1,"IP","cmnet","0.0.0.0",0,0)
+CGDCONT: (1,"IP","internet","0.0.0.0",0,0)
----------------------
+CIND: 0,3,1,0,0,0,1,0
+CIND: 0,0,0,0,0,0,0,0
----------------------
+CGATT: 1
+CGATT: 0
----------------------
+COPS: 1,0,""
+COPS: 0,2,""
----------------------
Q1: Could one of these settings cause the problem?
Q2: Is there a way to save/restore a modem config?
NB: the first setting of each pair is from the working modem.
It looks like the APN of the second modem differs from the first: the second modem's APN is "internet", while the first's is "cmnet". This can cause the problem (the first modem is attached while the second is not: +CGATT: 1 vs. 0) if the network does not support the "internet" APN.
You can try setting the second modem's APN to match the first one, i.e.:
AT+CGDCONT=1,"IP","cmnet"
But the APN difference is only one possible cause. To analyze the actual reason for the attach failure, logs are needed.
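Regarding Q2: many modems support the standard V.250 profile commands for saving and restoring the active configuration, though exact support varies by vendor, so check your modem's AT command manual:
AT&V   (view the active and stored profiles)
AT&W   (store the active configuration as the saved profile)
ATZ    (reset the modem and reload the saved profile)
AT&F   (restore factory defaults)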

WPA_supplicant authentication implementation

I need help from someone who has some experience with the wpa_supplicant code.
My understanding is that wpa_supplicant does everything needed for a supplicant to connect to an AP (if that is what you want). The steps are:
Scan
Get scan results
AUTH
ASSOC
4-way handshake
data exchange
As I understand it, the first four steps are managed solely by wpa_supplicant. That is, wpa_supplicant simply calls the underlying driver to perform these steps, and after the main event loop receives the EVENT_ASSOC message, it starts the 4-way handshake.
For my part, it is fine that the first two steps are carried out by the driver, i.e., wpa_supplicant sends a scan request, and the driver performs the scan and feeds back the scan results.
My question is: is it correct that wpa_supplicant cannot generate the necessary packets itself and use, e.g., layer 2 (a raw socket) to send an authentication request to the AP, followed by an association request? Must one simply provide these as handlers from the driver layer?
As I can see from the code in wpa_supplicant.c, the function
void wpa_supplicant_associate(struct wpa_supplicant *wpa_s, struct wpa_bss *bss, struct wpa_ssid *ssid)
calls a function pointer on the selected driver, e.g. .associate = wpa_driver_nl80211_associate, and the driver then sends this down to the underlying nl80211 driver code? So wpa_supplicant cannot generate these packets by itself?
I hope this makes sense; if not, please ask :)
Yes, your understanding is correct. To send an auth/assoc request, wpa_supplicant constructs the corresponding NL80211 commands, which differ between two scenarios:
a) in case the SME is maintained in wpa_supplicant
NL80211_CMD_AUTHENTICATE
NL80211_CMD_ASSOCIATE
b) in case the SME is maintained by driver
NL80211_CMD_CONNECT
These commands trigger the corresponding cfg80211_ops hooks (.auth, .assoc, .connect) registered by the wifi driver, which construct the frames and send them out.
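As an illustration (a sketch with assumed names, not code from any real driver), a wifi driver registers these hooks roughly like this:

/* Illustrative only; the example_* functions are hypothetical. */
static const struct cfg80211_ops example_cfg80211_ops = {
    .auth    = example_auth,    /* invoked for NL80211_CMD_AUTHENTICATE */
    .assoc   = example_assoc,   /* invoked for NL80211_CMD_ASSOCIATE */
    .connect = example_connect, /* invoked for NL80211_CMD_CONNECT */
};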

SSHLibrary Retry connection to host

I am using SSHLibrary 2.0 for Robot Framework. I am trying to open a connection to a host using a private key, but sometimes (not always) the connection is not established.
Sample code below:
index = self.SSHLibrary.open_connection(host)
self.SSHLibrary.login_with_public_key(username, passkey, password)
Is there a way to force a connection retry at least one more time?
You can use the keyword Wait Until Keyword Succeeds, which retries a keyword several times until it succeeds or the retries run out.
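For example, in Robot Framework syntax (the keyword name and the variables below are placeholders for your own host and credentials), wrap both steps in one keyword so the whole connection attempt is retried:

*** Keywords ***
Open And Login
    Open Connection    ${HOST}
    Login With Public Key    ${USERNAME}    ${KEYFILE}    ${PASSWORD}

*** Test Cases ***
Connect With Retry
    Wait Until Keyword Succeeds    2x    5 seconds    Open And Login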

How to set read timeout for ftp control connection

I am using Apache Commons Net FTP version 3.1.
The FTP connection intermittently gets into a hung state while doing a listing operation.
The reason seems to be that the FTP client waits indefinitely for the server's response to the PASV command while trying to open the data connection for the listing operation.
How do I set a read timeout on the control connection to avoid this situation?
I have set the read timeout on the data connection using setDataTimeout().
For more, refer to:
http://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/ftp/FTPClient.html#setDataTimeout(int)
1) Does calling setSoTimeout() after the FTP connect() operation help avoid this situation on the control connection?
For more, refer to:
http://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/SocketClient.html#setSoTimeout(int)
2) If so, what is the optimum timeout value to set with setSoTimeout()?
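A minimal sketch of the approach in question 1 (not a verified fix; the host, credentials, and timeout values are placeholders to tune for your environment):

import org.apache.commons.net.ftp.FTPClient;

public class FtpControlTimeoutSketch {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.setConnectTimeout(30000);    // TCP connect timeout, in ms
        ftp.connect("ftp.example.com");  // hypothetical host
        ftp.setSoTimeout(60000);         // read timeout on the control socket, in ms;
                                         // must be set after connect(), once the socket exists
        ftp.setDataTimeout(60000);       // read timeout on data connections, in ms
        ftp.login("user", "password");   // placeholder credentials
        ftp.enterLocalPassiveMode();
        ftp.listFiles();                 // a hung PASV reply now fails with SocketTimeoutException
        ftp.logout();
        ftp.disconnect();
    }
}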
Please find stack trace below:
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:140)
at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
at java.io.InputStreamReader.read(InputStreamReader.java:188)
at java.io.BufferedReader.fill(BufferedReader.java:147)
at java.io.BufferedReader.read(BufferedReader.java:168)
at org.apache.commons.net.io.CRLFLineReader.readLine(CRLFLineReader.java:58)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:310)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479)
at org.apache.commons.net.ftp.FTPClient.openDataConnection(FTPClient.java:769)
at org.apache.commons.net.ftp.FTPClient.openDataConnection(FTPClient.java:657)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:3097)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:3072)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:2972)
Any help on this will be appreciated :)
Thanks.