How to open/write/read port in REBOL3? - rebol

I have this code in REBOL2:
port: open/direct tcp://localhost:8080
insert port request
result: copy port
close port
What would be the equivalent in REBOL3?

REBOL3 networking is asynchronous by default, so the equivalent code in REBOL3 looks like this:
client: open tcp://localhost:8080
client/awake: func [event /local port] [
    port: event/port
    switch event/type [
        lookup [open port]
        connect [write port to-binary request] ; 'request holds the string you want to send
        read [
            result: to-string port/data
            close port
            return true
        ]
        wrote [read event/port]
    ]
    false
]
wait [client 30] ; the number is the timeout in seconds
close client
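Note that request is left undefined in the snippet; it is whatever string you intend to send once the connection is up. A minimal sketch, assuming a plain HTTP-style request to the same localhost:8080 service (the request content itself is just an example):
request: rejoin [
    {GET / HTTP/1.1} crlf
    {Host: localhost:8080} crlf
    {Connection: close} crlf
    crlf
]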
Based on: http://www.rebol.net/wiki/TCP_Port_Examples
EDIT: the above link no longer exists, but the page has been transferred to GitHub's wiki: https://github.com/revault/rebol-wiki/wiki/TCP-Port-Examples

Related

How to concatenate password with mysql:// protocol in Rebol?

I have a password that contains "]", so Rebol doesn't accept mysql://user:password.
How do I concatenate a string with mysql:// ?
You can use the block form to open the port:
my-database: open [
    scheme: 'mysql
    host: "localhost"
    user: "user"
    pass: "pass"
    path: "/dbpath"
]
You can examine the output of the DECODE-URL function to see how Rebol turns a URL into a port specification:
probe decode-url foo://bar:baz@foobar.qux:999/quux
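As a hedged illustration (the mysql:// scheme comes from a third-party driver, and the credentials below are made up), the block form lets you splice in the real password without worrying about "]" at all, for example with COMPOSE:
pass-value: {my]password} ; "]" is harmless inside an ordinary string
my-database: open compose [
    scheme: 'mysql
    host: "localhost"
    user: "user"
    pass: (pass-value)
    path: "/dbpath"
]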

Fiware Orion context broker subscription notification issue

I'm using the Orion Context Broker GE image orion-psb-image-R5.4, version 1.7.0, and I registered a device entity in it. I then implemented a simple Python server script on my Raspberry Pi that listens for any incoming message and prints it in the Pi's logs, and I sent a subscription request to the context broker so that the Raspberry Pi subscribes to the corresponding entity. The issue is that whenever I update the condition attributes of the entity in the context broker, they are supposed to trigger a notification to the Raspberry Pi, and the server script on the Pi should then print the notification in its logs. What really happens is that the context broker may trigger the notification a few times and then suddenly stops sending any notification when further changes are applied to the condition attributes. On every attempt I retrieve the subscription status from the context broker and find that a failure occurred, with the lastFailure attribute giving the time of my last failed attempt.
I thought the problem could be the connection to my Pi, or even the server script itself, but when I send requests directly from my terminal to the Raspberry Pi it prints all the messages immediately, even when the request is made from a remote location. So I concluded that the problem lies with the context broker and the notification process of the subscription itself.
Here's the subscription request I made:
curl -v contextbrokeraddress:1026/v2/subscriptions -s -S --header "Fiware-Service: XYZ" --header "Fiware-ServicePath: /XYZ" --header 'Content-Type: application/json' \
-d @- <<EOF
{
  "description": " Try",
  "subject": {
    "entities": [
      {
        "id": "Controller1",
        "type": "Controller"
      }
    ],
    "condition": {
      "attrs": [
        "switch",
        "datashow"
      ]
    }
  },
  "notification": {
    "http": {
      "url": "http://raspberryPiaddress:8080"
    },
    "attrs": [
      "switch",
      "datashow"
    ]
  },
  "expires": "2040-01-01T14:00:00.00Z",
  "throttling": 5
}
EOF
Now, when the switch attribute is updated with a different value, it may trigger the notification to the Raspberry Pi the first time, but it fails on any following attempts.
This is the simple Python script that listens for incoming notifications and prints them in its logs:
import socket

HOST, PORT = '', 8080

# plain TCP listener: accepts one connection at a time and dumps whatever
# it receives (the notification HTTP request) to stdout
listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listen_socket.bind((HOST, PORT))
listen_socket.listen(1)
while True:
    print "listening on port 8080"
    client_connection, client_address = listen_socket.accept()
    print "notified"
    request = client_connection.recv(1024)
    print request
    client_connection.close()
And this is how the notification from the context broker is printed in the Pi's logs when it succeeds:
listening on port 8080
notified
POST / HTTP/1.1
User-Agent: orion/1.7.0 libcurl/7.19.7
Host: raspberryPiaddress:8080
fiware-service: XYZ
Fiware-ServicePath: /XYZ
X-Auth-Token: token
Accept: application/json
Content-length: 208
Content-type: application/json; charset=utf-8
Fiware-Correlator: f48ced60-1069-11e7-b743-fa163e7c4daf
Ngsiv2-AttrsFormat: normalized
{"subscriptionId":"58cd49191e9c000de6ea89c7","data":[{"id":"Controller1","type":"Controller","switch":{"type":"command","value":"OFF","metadata":{}},"datashow":{"type":"string","value":"OFF","metadata":{}}}]}
When the notification is not received for subsequent updates to the entity, I check the subscription status, which states that a failure happened in the context broker at the exact time of my attempt. This is the status I retrieve to check that:
[
  {
    "description": " Try",
    "expires": "2040-01-01T14:00:00.00Z",
    "id": "58cd49191e9c000de6ea89c7",
    "notification": {
      "attrs": [
        "switch",
        "datashow"
      ],
      "attrsFormat": "normalized",
      "http": {
        "url": "http://ahmadpi.ddns.net:8080"
      },
      "lastFailure": "2017-03-24T08:22:23.00Z",
      "lastNotification": "2017-03-24T08:22:18.00Z",
      "lastSuccess": "2017-03-23T22:09:33.00Z",
      "timesSent": 66
    },
    "status": "failed",
    "subject": {
      "condition": {
        "attrs": [
          "switch",
          "datashow"
        ]
      },
      "entities": [
        {
          "id": "Controller1",
          "type": "Controller"
        }
      ]
    },
    "throttling": 5
  }
]
The problem now seems to be related to the context broker and the way the subscription/notification processes are handled inside it. I want to know whether the problem comes from the context broker image version that I used, or from something else. Where is the problem and how can it be handled? Thanks.
Although I'm not fully sure, as I don't have all the inputs (especially CB log traces), based on "it worked well as I said but sometimes stops for some reasons" (see the comments thread on the question post) I tend to think it is a networking/connectivity problem, not directly related to Orion Context Broker.
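One way to confirm or rule that out (a hedged sketch, not an Orion feature) is to send a hand-made POST from the host running the context broker to the same notification URL and see whether it reaches the Pi's logs, for example with a small Python 2 script in the same style as the listener above:
# hypothetical connectivity check: run this on the context broker host
import urllib2

url = "http://raspberryPiaddress:8080"  # same URL as in the subscription
data = '{"test": "connectivity check"}'
req = urllib2.Request(url, data, {"Content-Type": "application/json"})
try:
    urllib2.urlopen(req, timeout=10)
    print "got an HTTP response back"
except Exception as e:
    # the listener never sends an HTTP response before closing, so an error
    # such as BadStatusLine is expected; what matters is whether "notified"
    # shows up in the Pi's logs
    print "request sent, connection result: %r" % (e,)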

Partial read from urls with READ/PART or READ/SEEK

Partial read via HTTP Range header works fine for me:
rebol []
client: open tcp://www.apache.org/
client/awake: func [event /local port] [
    port: event/port
    switch event/type [
        lookup [open port]
        connect [
            write port rejoin [
                {GET / HTTP/1.1} crlf
                {User-Agent: curl/7.26.0} crlf
                {Host: www.apache.org} crlf
                {Accept: */*} crlf
                {Range: bytes=0-9} crlf
                crlf
            ]
        ]
        wrote [read port]
        read [
            probe to-string port/data
            probe length? port/data
            clear port/data
        ]
    ]
    false
]
wait [client 3]
close client
print "Done"
I think I could use READ/PART to do the same thing:
length? read/part http://www.apache.org/ 10 ;40195
length? read http://www.apache.org/ ;40195
but it doesn't work; I still get all the bytes. The same happens with READ/SEEK.
Why is that?
(By the way, it works in Rebol2.)
You can see from the source
https://github.com/rebol/rebol/blob/master/src/mezz/prot-http.r#L424
that the read actor does not have any refinements defined. This doesn't mean they can't be defined but at present no decision has been made on whether it should be done by using refinements, or by using a query dialect.
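Until a decision is made, a hedged workaround is to read the whole resource and slice it afterwards (wasteful for large resources, but it works today):
data: read http://www.apache.org/ ; returns the full body as binary!
part: copy/part data 10
length? part ; == 10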
You can see, by setting trace/net on, that Rebol2 is faking it:
>> trace/net on
>> read/part http://www.apache.org 10
URL Parse: none none www.apache.org none none none
Net-log: ["Opening" "tcp" "for" "HTTP"]
connecting to: www.apache.org
Net-log: {GET / HTTP/1.0
Accept: */*
Connection: close
User-Agent: REBOL View 2.7.8.3.1
Host: www.apache.org
}
Net-log: "HTTP/1.1 200 OK"
Net-log: ["low level read of " 2048 "bytes"]
Net-log: ["low level read of " 2048 "bytes"]
.. many lines removed
Net-log: ["low level read of " 2048 "bytes"]
Net-log: ["low level read of " 2048 "bytes"]
Net-log: ["low level read of " 2048 "bytes"]
== "<!DOCTYPE "
It's a safe guess that it's simply not implemented in the current version of the HTTP scheme. There are other missing parts too, such as redirection, so I would assume it's just not supported yet.

RabbitMQ and Pika

I'm using the Python library pika to work with RabbitMQ.
RabbitMQ is running and listening on 0.0.0.0:5672. When I try to connect to it from another server, I get an exception:
socket.timeout: timed out
The Python code is taken from the official RabbitMQ docs ("Hello, World!").
I have tried disabling iptables.
But if I run the script with host "localhost", everything works.
My /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]}
  ]}
].
Code:
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.10.150', port=5672, virtual_host='/',
    credentials=pika.credentials.PlainCredentials('user', '123456')))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
message = "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode = 2, # make message persistent
                      ))
print " [x] Sent %r" % (message,)
connection.close()
Since you are connecting from another server, you should check your machine's firewall settings.
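As a hedged sketch (using the host and port from the question), you can check plain TCP reachability from the client machine before involving pika at all; if this times out, a firewall or listener configuration problem is the likely culprit:
# minimal TCP reachability check for the broker, run from the remote machine
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect(('192.168.10.150', 5672))
    print "TCP connection to 5672 succeeded - check credentials/vhost instead"
except (socket.error, socket.timeout) as e:
    print "TCP connection failed: %r - likely a firewall or listener problem" % (e,)
finally:
    s.close()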

rebol open has no refinement called async

I tried the example at http://www.rebol.net/docs/async-examples.html but it doesn't work.
port-spec: tcp://www.rebol.net:80
http-request: {GET /
User-Agent: REBOL/Core
Connection: close
}
client: context [
    data: make binary! 10000
    handler: func [port action arg] [
        switch action [
            read [
                append data copy/part port arg
                print ["-- read" arg "bytes" length? data "total"]
            ]
            write [print "-- writing (sending)"]
            write-done [print "-- done with write"]
            close [
                print ["-- done with read" length? data]
                close port
                print ["-- closed port, press RETURN to quit"]
            ]
            init [print "-- port initialized"]
            open [print "-- opened" insert port http-request]
            address [print ["-- address lookup:" arg]]
            error [print ["-- error:" mold disarm :arg] close port]
        ]
    ]
]
p: open/direct/binary/async port-spec get in client 'handler
input ; (wait for user console input before closing)
attempt [close p]
The /async refinement was removed a while ago. If you want to use async, you'll have to use Gabriele's and others' async protocols.