I'm trying to run the sample program from this RedisLabs page.
I chose Option A - which was to set up the free Redis cloud server.
(It seems that if you install manually, you have to add JSON as a module.)
I'm able to connect and use other commands such as SET, but I get an error on the JSON commands:
File "C:\Users\nwalt\.virtualenvs\TDAmeritradeGetQuotes\lib\site-packages\redis\client.py", line 901, in execute_command
return self.parse_response(conn, command_name, **options)
File "C:\Users\nwalt\.virtualenvs\TDAmeritradeGetQuotes\lib\site-packages\redis\client.py", line 915, in parse_response
response = connection.read_response()
File "C:\Users\nwalt\.virtualenvs\TDAmeritradeGetQuotes\lib\site-packages\redis\connection.py", line 756, in read_response
raise response
redis.exceptions.ResponseError: unknown command 'JSON.SET'
My Python test program (I substituted the sample endpoint before posting):
import redis
import json
import pprint
host_info = "redis.us-east-1-1.ec2.cloud.redislabs.com"
redisObj = redis.Redis(host=host_info, port=18274, password='xxx')
print ("Normal call to Redis")
redisObj.set('foo', 'bar')
value = redisObj.get('foo')
print(value)
capitals = {
    "Lebanon": "Beirut",
    "Norway": "Oslo",
    "France": "Paris"
}
print ("capitals - before call to Redis")
pprint.pprint(capitals)
print("JSON call to Redis")
redisObj.execute_command('JSON.SET', 'doc', '.', json.dumps(capitals))
print("Data Saved, now fetch data back from redis")
reply = json.loads(redisObj.execute_command('JSON.GET', 'doc'))
print("reply from Redis get")
pprint.pprint(reply)
This is the screenshot from their website where I created the database. I didn't see any option to enable JSON or add any modules.
I'm not sure this was available when I created the Redis database, but it is now: when you create a database on redislabs.com, you can turn on modules and pick one from the list.
Then use the rejson library from https://pypi.org/project/rejson/ to get the jsonset method, with code such as this:
from rejson import Client, Path  # pip install rejson
import pprint

rj = Client(host=config_dict['REDIS_CONFIG_HOST'],
            port=config_dict['REDIS_CONFIG_PORT'],
            password=config_dict['REDIS_CONFIG_PASSWORD'],
            decode_responses=True)
out_doc = {}
out_doc['firstname'] = "John"
out_doc['lastname'] = "Doe"
rj.jsonset('config', Path.rootPath(), out_doc)
get_doc = rj.jsonget('config', Path.rootPath())
pprint.pprint(get_doc)
I'm not using the cloud Redis; on my local install Python couldn't run JSON.SET either. I got it working by following this sample: https://onelinerhub.com/python-redis/save-json-to-redis
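For reference, the approach in that sample works against any Redis server, with or without the RedisJSON module: serialize the dict yourself and store it under a plain string key. A minimal sketch (the host and port are placeholders):

import json
import redis

r = redis.Redis(host='localhost', port=6379)  # adjust to your endpoint

capitals = {"Lebanon": "Beirut", "Norway": "Oslo", "France": "Paris"}
r.set('capitals', json.dumps(capitals))  # plain SET, no module required
reply = json.loads(r.get('capitals'))    # decode on the way back
print(reply["France"])                   # Paris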
I have an API which prints a success or error message to the console. I am new to Python and am trying to read that response. Google throws up so many examples using subprocess, but I don't want to run or call any command or subprocess; I just want to read the output after the API call below.
This is the response in the console on success:
17:50:52 | Logged in!!
This is the GitHub link for the SDK and documentation:
https://github.com/5paisa/py5paisa
This is the code
from py5paisa import FivePaisaClient
email = "myemailid#gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key"
}
client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)
client.login()
In general it is bad practice to scrape a value from STDOUT. There are ways to do it, but they're pretty tricky (stdout isn't made for this). And the problem doesn't come from you but from the API, which is badly designed: it should return a value, e.g. True or False (at least), to tell you whether you logged in, and it doesn't.
So, according to their documentation it is not possible to know directly whether you're logged in, but you may be able to tell by checking the client_code attribute of the client object.
If client.client_code is equal to something then you should be logged in, and if it is equal to something else then you are not. You can compare its value after a successful login and after a failed one (wrong credentials, for instance). Then you can add a condition: if it is None or False or 0 (you will have to check this yourself), the login failed.
Can you try doing the following with a successful and failed login:
client.login()
print(client.client_code)
Source of the API:
# Login function:
# (...)
message = res["body"]["Message"]
if message == "":
    log_response("Logged in!!")
else:
    log_response(message)
self._set_client_code(res["body"]["ClientCode"])
# (...)

# _set_client_code function:
def _set_client_code(self, client_code):
    try:
        self.client_code = client_code  # <<<< That's what we want
    except Exception as e:
        log_response(e)
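As for the trickier route of actually capturing stdout mentioned above, here is a minimal sketch, assuming the message really is written to stdout with print() (if the library emits it through the logging module instead, see the handler approach below):

import contextlib
import io

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    client.login()  # anything printed during the call is captured in buf

if "Logged in!!" in buf.getvalue():
    print("login succeeded")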
Since this question asks how to capture "stdout", one way to accomplish this is to intercept the log message before it hits stdout.
The minimum code to capture a log message within a Python script looks like this:
#!/usr/bin/env python3
import logging

logger = logging.getLogger(__name__)

class RequestHandler(logging.Handler):
    def emit(self, record):
        if record.getMessage().startswith("Hello"):
            print("hello detected")

handler = RequestHandler()
logger.addHandler(handler)
logger.warning("Hello world")
Putting it all together you may be able to do something like this:
import logging
from py5paisa import FivePaisaClient
email = "myemailid#gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key"
}
client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)
class PaisaClient(logging.Handler):
    def __init__(self):
        super().__init__()
        self.loggedin = False  # this is the variable we can use to see if we are "logged in"

    def emit(self, record):
        if record.getMessage().startswith("Logged in!!"):
            self.loggedin = True

    def login(self):
        client.login()

# install the handler on the root logger; records from the py5paisa
# library's logger propagate up to it
# tutorial here: https://betterstack.com/community/questions/how-to-disable-logging-from-python-request-library/
c = PaisaClient()
logging.basicConfig(handlers=[c], level=0, force=True)
c.login()
I am compressing a string using zlib and storing it in an Aerospike bin. On retrieving and decompressing it, I get "zlib.error: Error -5 while decompressing data: incomplete or truncated stream".
When I compared the original compressed data with the retrieved compressed data, something was missing at the end of the retrieved data.
I am using Aerospike 3.7.3 & python client 2.0.1
Please help
Thanks
Update: I tried using bz2. It throws "ValueError: couldn't find end of stream" on retrieve and decompress. It looks like Aerospike is stripping off the last byte or something else from the blob.
Update: Posting the code
import aerospike
import bz2
config = {
    'hosts': [
        ('127.0.0.1', 3000)
    ],
    'policies': {
        'timeout': 1000  # milliseconds
    }
}
client = aerospike.client(config)
client.connect()
content = "An Aerospike Query"
content_bz2 = bz2.compress(content)
key = ('benchmark', 'myset', 55)
#client.put(key, {'bin0':content_bz2})
(key, meta, bins) = client.get(key)
print bz2.decompress(bins['bin0'])
I get the following error:
Traceback (most recent call last):
  File "asread.py", line 22, in <module>
    print bz2.decompress(bins['bin0'])
ValueError: couldn't find end of stream
The bz2.compress method returns a string, and the client sees that type and tries to convert it to the server's as_str type. If it runs into a \0 in an unexpected position it will truncate the string, causing your error.
Instead, make sure to cast binary data to a bytearray, which the client converts to the server's as_bytes type. On the read operation, bz2.decompress will work with the bytearray data and give you back the original string.
from __future__ import print_function
import aerospike
import bz2
config = {'hosts': [( '33.33.33.91', 3000 )]}
client = aerospike.client(config)
client.connect()
content = "An Aerospike Query"
content_bz2 = bytearray(bz2.compress(content))
key = ('test', 'bytesss', 1)
client.put(key, {'bin0':content_bz2})
(key, meta, bins) = client.get(key)
print(type(bins['bin0']))
bin0 = bz2.decompress(bins['bin0'])
print(type(bin0))
print(bin0)
Gives back
<type 'bytearray'>
<type 'str'>
An Aerospike Query
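As a side note, under Python 3 bz2.compress requires bytes, so the round trip needs an explicit encode/decode. A rough sketch, assuming a current aerospike client that still maps bytearray to the server's bytes type:

import aerospike
import bz2

config = {'hosts': [('33.33.33.91', 3000)]}
client = aerospike.client(config)
client.connect()

content = "An Aerospike Query"
content_bz2 = bytearray(bz2.compress(content.encode('utf-8')))  # bytes in, bytes out

key = ('test', 'bytesss', 1)
client.put(key, {'bin0': content_bz2})
(key, meta, bins) = client.get(key)
print(bz2.decompress(bytes(bins['bin0'])).decode('utf-8'))  # An Aerospike Query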
I just started using Spyne and tried to use a ComplexModel as a parameter for one method. I mostly followed the user_manager example from the sources (with spyne<2.99), but I always get an error on the client.factory.create() call.
Example code that fails:
from spyne.application import Application
from spyne.decorator import rpc
from spyne.service import ServiceBase
from spyne.protocol.soap import Soap11
from spyne.model.primitive import String, Integer
from spyne.model.complex import ComplexModel
class DatosFac(ComplexModel):
    __namespace__ = 'facturamanager.datosfac'
    numero = String(pattern=r'[A-Z]/[0-9]+')

class FacturaService(ServiceBase):
    @rpc(String, DatosFac, _returns=Integer)
    def updateFacData(self, numero, data):
        # do stuff
        return 1

application = Application([FacturaService], 'facturaManager.service',
                          in_protocol=Soap11(validator='lxml'),
                          out_protocol=Soap11()
                          )

from spyne.server.null import NullServer
s = NullServer(application)
data = s.factory.create('DatosFac')
If you run this code you get:
Traceback (most recent call last):
  File "spyner.py", line 25, in <module>
    data = s.factory.create('DatosFac')
  File "/Users/marc/.pyEnvs/default/lib/python2.7/site-packages/spyne/client/_base.py", line 30, in create
    return self.__app.interface.get_class_instance(object_name)
  File "/Users/marc/.pyEnvs/default/lib/python2.7/site-packages/spyne/interface/_base.py", line 114, in get_class_instance
    return self.classes[key]()
KeyError: 'DatosFac'
(I used NullServer to make it easier to reproduce, but the same happens over Soap+Wsgi).
I am pretty much stuck on this, as I don't see what's essentially different between this code and the user_manager examples.
What am I doing wrong?
thanks,
marc
Thanks for providing a fully working example.
The difference is that the application's tns and the namespace of DatosFac are different.
Either do:
data = s.factory.create('{facturamanager.datosfac}DatosFac')
or remove __namespace__ from the DatosFac definition.
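A minimal sketch of the first option against the question's NullServer setup (the numero value is just an illustration matching the declared pattern):

data = s.factory.create('{facturamanager.datosfac}DatosFac')
data.numero = 'A/123'  # any value matching r'[A-Z]/[0-9]+'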
I have a small web server written using Twisted. One of the things I want to do is have it return a result from another web server as the response to loading a page. That is, the response to render_GET() at server A (via http://A.com/resource) should be the content of a different URL at server B (via http://B.com/resource2). The content returned by server B is dynamic, so I can't just cache it.
Right now, server A can render pages just fine; it just can't render this remote resource. I've tried with Agent(), but I can't seem to get the response from B, let alone forward it to A. I know that somewhere I have to take the request from render_GET and later write() and finish() it. That's done in the cbBody callback, which gets called but can't get at the original request to populate it.
Here's a piece of the code for server A's resource handler:
def render_GET(self, request):
    # try with canned content just to test the whole thing
    bmpServer = BMPServer(ServerBURL,
                          "xyzzy",
                          "plugh")
    d = bmpServer.postNotification({"a": 123}, request)
    print "Deferred", d
    return NOT_DONE_YET
And this is the other code at server A:
theRequest = None

def cbRequest(response, args):
    print "response called"
    print response
    print args
    print 'Response version:', response.version
    print 'Response code:', response.code
    print 'Response phrase:', response.phrase
    print 'Response headers:'
    print pformat(list(response.headers.getAllRawHeaders()))
    d = readBody(response)
    d.addCallback(cbBody)
    return d

def cbBody(body):
    print "Response body:"
    print body
    theRequest.write(body)
    theRequest.finish()
    theRequest = None

def cbError(failure):
    print type(failure.value), failure  # catch error here
    print failure.value.reasons[0].printTraceback()

class BMPServer(object):
    def __init__(self, url, arg1, arg2):
        self.url = url
        self.arg1 = arg1
        self.arg2 = arg2

    def postNotification(self, message, request):
        theRequest = request
        bmpMessage = {'arg1': self.token,
                      'arg2': self.appID,
                      'message': message}
        print "Sending request to %s" % self.url
        print "Create agent"
        agent = Agent(reactor)
        print "create request deferred"
        print "url = %s" % self.url
        d = agent.request('POST', self.url,
                          Headers({'User-Agent': ['Twisted Web Client Example']}),
                          MessageProducer(bmpMessage))
        print "adding callback"
        d.addCallbacks(cbRequest, cbError)
        print "returning deferred"
        return d
When I run this as standalone code (outside of the resource, using react() for example), it works fine. However, when I try to include it as shown above, it just never seems to receive the data. I've got Wireshark running, so I can see the response is being returned from server B, but the data never shows up in cbRequest().
For example, here's the output I see:
Sending request to http://localhost:8888/postMGCMNotificationService
Create agent
create request deferred
url = http://serverB:8888/postService
Message producer: body = {"arg2": "plugh", "arg1": "xyzzy", "message": {"a": 1}}
adding callback
returning deferred
testAgent: returning deferred
<Deferred at 0x10b54d290>
Writing body now
response called
<twisted.web._newclient.Response object at 0x1080753d0>
Response version: ('HTTP', 1, 1)
Response code: 200
Response phrase: OK
Response headers:
Response body:
{"result": false}
^CUnhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/Twisted-13.1.0_r39314-py2.7-macosx-10.8-intel.egg/twisted/web/_newclient.py", line 1151, in _bodyDataFinished_CONNECTED
    self._bodyProtocol.connectionLost(reason)
  File "/Library/Python/2.7/site-packages/Twisted-13.1.0_r39314-py2.7-macosx-10.8-intel.egg/twisted/web/client.py", line 1793, in connectionLost
    self.deferred.callback(b''.join(self.dataBuffer))
  File "/Library/Python/2.7/site-packages/Twisted-13.1.0_r39314-py2.7-macosx-10.8-intel.egg/twisted/internet/defer.py", line 382, in callback
    self._startRunCallbacks(result)
  File "/Library/Python/2.7/site-packages/Twisted-13.1.0_r39314-py2.7-macosx-10.8-intel.egg/twisted/internet/defer.py", line 490, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/Library/Python/2.7/site-packages/Twisted-13.1.0_r39314-py2.7-macosx-10.8-intel.egg/twisted/internet/defer.py", line 577, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "AServer.py", line 85, in cbBody
    print theRequest
exceptions.UnboundLocalError: local variable 'theRequest' referenced before assignment
Looking at this a little more, it seems that if I could figure out a way to get the request over to cbBody() this would all work just fine.
You can pass extra arguments to callbacks on a Deferred:
d.addCallback(f, x)
When d fires, the result is f(result of d, x). You can pass as many positional or keyword arguments as you like this way. See the API documentation for Deferred for more details.
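Applied to the question's code, a minimal sketch (reusing the names defined above) threads the incoming request through the callback chain instead of relying on the module-level theRequest:

def cbRequest(response, request):
    d = readBody(response)
    d.addCallback(cbBody, request)  # forward the request to the body callback
    return d

def cbBody(body, request):
    request.write(body)
    request.finish()

# inside postNotification: hand the incoming request to the callbacks
d = agent.request('POST', self.url,
                  Headers({'User-Agent': ['Twisted Web Client Example']}),
                  MessageProducer(bmpMessage))
d.addCallbacks(cbRequest, cbError, callbackArgs=(request,))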
Is it possible to use Redis's MOVE command to move all keys from one database to another? The MOVE command only moves one key, but I need to move all the keys in the database.
I would recommend taking a look at the following alpha-version app to back up and restore Redis databases (you can install it via gem install redis-dump). You could redis-dump your database and then redis-load it into another database via the --database argument.
redis-dump project
If this doesn't fit your purposes, you may need to make use of a scripting language's redis bindings (or alternatively throw something together using bash / redis-cli / xargs, etc). If you need assistance along these lines then we probably need more details first.
I've written a small Python script to move data between two Redis servers (it only supports list and string types, and you must install the Python Redis client):
'''
Created on 2011-11-9
@author: wuyi
'''
import redis
from optparse import OptionParser
import time

def mv_str(r_source, r_dest, quiet):
    keys = r_source.keys("*")
    for k in keys:
        if r_dest.keys(k):
            print "skipping %s" % k
            continue
        else:
            print "copying %s" % k
            r_dest.set(k, r_source.get(k))

def mv_list(r_source, r_dest, quiet):
    keys = r_source.keys("*")
    for k in keys:
        length = r_source.llen(k)
        i = 0
        while (i < length):
            print "add queue no.:%d" % i
            v = r_source.lindex(k, i)
            r_dest.rpush(k, v)
            i += 1

if __name__ == "__main__":
    usage = """usage: %prog [options] source dest"""
    parser = OptionParser(usage=usage)
    parser.add_option("-q", "--quiet", dest="quiet",
                      default=False, action="store_true",
                      help="quiet mode")
    parser.add_option("-p", "--port", dest="port",
                      default=6380,
                      help="port for both source and dest")
    parser.add_option("", "--dbs", dest="dbs",
                      default="0",
                      help="db list: 0 1 120 220...")
    parser.add_option("-t", "--type", dest="type",
                      default="normal",
                      help="available types: normal, lpoplist")
    parser.add_option("", "--tmpdb", dest="tmpdb",
                      default=0,
                      help="tmp db number to store tmp data")
    (options, args) = parser.parse_args()

    if not len(args) == 2:
        print usage
        exit(1)
    source = args[0]
    dest = args[1]
    if source == dest:
        print "dest must not be the same as source!"
        exit(2)

    dbs = options.dbs.split(' ')
    for db in dbs:
        r_source = redis.Redis(host=source, db=db, password="", port=int(options.port))
        r_dest = redis.Redis(host=dest, db=db, password="", port=int(options.port))
        print "______________db____________:%s" % db
        time.sleep(2)
        if options.type == "normal":
            mv_str(r_source, r_dest, options.quiet)
        elif options.type == "lpoplist":
            mv_list(r_source, r_dest, options.quiet)
        del r_source
        del r_dest
You can try my own tool, rdd.
It's a command-line utility that can dump a database to a file, work on it (filter, match, merge, ...), and load it back into a Redis instance.
Take care, it's at the alpha stage: https://github.com/r043v/rdd/
Now that Redis has scripting using Lua, you can easily write a command that loops through all the keys, checks their type, and moves them accordingly to a new database.
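For the same effect without Lua, here is a rough sketch with redis-py (note that MOVE only works between databases of the same instance, and it returns 0 for keys that already exist in the destination):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)  # source db; host/port are placeholders
for key in r.scan_iter(count=1000):  # SCAN avoids blocking on a large keyspace
    r.move(key, 1)  # MOVE each key to destination db 1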
I suggest you try it as below:
1. Copy the RDB file to another directory.
2. Rename the RDB file.
3. Modify the Redis configuration file so the new database points to it.