How to extract key and value from r=requests.get('http....') - api

So, I have:
r = requests.get('http....')
r = r.json()  # here I get JSON (probably)
Then I want to print 'body' from this JSON:
print(r['body'])
and I get the error: TypeError: list indices must be integers or slices, not str
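That error means r.json() returned a list, not a dict: the API responded with a JSON array of objects, so you have to index with an integer before you can use a string key. A minimal sketch with a made-up sample payload (your API's actual fields may differ):

```python
import json

# Made-up sample payload: the API returned a JSON *array* of objects,
# which is exactly the case that raises
# "TypeError: list indices must be integers or slices, not str".
payload = json.loads('[{"id": 1, "body": "hello"}, {"id": 2, "body": "world"}]')

if isinstance(payload, list):     # a JSON array: index with an integer first
    print(payload[0]["body"])
else:                             # a single JSON object: index by key
    print(payload["body"])
```

Printing type(payload) or the raw response text is the quickest way to see which shape you actually got.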


Append a value to json array

I have a PostgreSQL table profile with columns (username, friends). username is a string and friends is a json type and currently holds the value ["Mary", "Bob"]. I want to append an item at the end of the array so it becomes ["Mary", "Bob", "Alice"].
I have tried:
UPDATE profile SET friends = friends || '["Alice"]'::jsonb WHERE username = 'David';
This yields the error:
[ERROR] 23:56:23 error: operator does not exist: json || jsonb
I tried changing the first expression to include json instead of jsonb but I then got the error:
[ERROR] 00:06:25 error: operator does not exist: json || json
Other answers seem to suggest the || operator is indeed a thing, e.g.:
Appending (pushing) and removing from a JSON array in PostgreSQL 9.5+
How do I append an item to the end of a json array?
The data type json is much more limited than jsonb. Use jsonb instead of json and the solution you found simply works.
While stuck with json, a possible workaround is to cast to jsonb and back:
UPDATE profile
SET friends = (friends::jsonb || '["Alice"]')::json
WHERE username = 'David';
You might use an explicit jsonb cast for '["Alice"]', too, but that's optional: while the expression is unambiguous like that, the untyped string literal is coerced to jsonb automatically. If you instead provide a typed value in place of '["Alice"]', an explicit cast may be required.
If friends can be NULL, you'll want to define what should happen then, e.g. by substituting an empty array with COALESCE(friends::jsonb, '[]'::jsonb). Currently nothing happens; to be precise, the update runs but leaves the NULL value unchanged.
Then again, if you only have simple arrays of strings, consider the plain Postgres array type text[] instead.
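For illustration, the concatenation that jsonb || performs on two arrays can be sketched in plain Python (a rough analogy, not how Postgres implements it):

```python
import json

stored = '["Mary", "Bob"]'      # the current friends value
to_append = '["Alice"]'

# jsonb_array || jsonb_array concatenates the two arrays;
# in Python terms, parse both and add the lists.
result = json.loads(stored) + json.loads(to_append)
print(json.dumps(result))       # ["Mary", "Bob", "Alice"]
```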

Getting DataFrame's Column value results in 'Column' object is not callable

For a stream read from FileStore, I'm trying to check whether the value in the first column of the first row equals some string. Unfortunately, however I access this column (e.g. calling .toList() on it), it throws:
if df["Name"].iloc[0].item() == "Bob":
TypeError: 'Column' object is not callable
I'm calling the customProcessing function from:
df.writeStream \
    .format("delta") \
    .foreachBatch(customProcessing) \
    [...]
And inside this function I'm trying to get the value, but none of the ways of getting the data works; the same error is thrown.
def customProcessing(df, epochId):
    if df["Name"].iloc[0].item() == "Bob":
        [...]
Is there a way to read single columns? Or is this writeStream-specific, meaning I can't use conditions on that input?
There is no iloc for Spark DataFrames; this is not pandas, and there is no concept of a row index.
If you want to get the first item you could try
df.select('Name').limit(1).collect()[0][0] == "Bob"

What's the syntax to pass a dictionary as an input type in Flyte?

I'm trying to pass a dictionary as a parameter to a batch_sub_task, but I'm not quite sure how to define the @input.
Have you tried Types.Generic as the type? It maps to a JSON object in Python.
Use this to specify a simple JSON type.
When used with an SDK-decorated method, expect this behavior from the default type engine:
As input:
1) If set, a Python dict with JSON-ifiable primitives and nested lists or maps.
2) Otherwise, a None value will be received.
As output:
1) User code may pass a Python dict with arbitrarily nested lists and dictionaries. JSON-ifiable
primitives may also be specified.
2) Output can also be nulled with a None value.
From command-line:
Specify a JSON string.
@inputs(a=Types.Generic)
@outputs(b=Types.Generic)
@python_task
def operate(wf_params, a, b):
    if a['operation'] == 'add':
        a['value'] += a['operand']  # a['value'] is a number
    elif a['operation'] == 'merge':
        a['value'].update(a['some']['nested'][0]['field'])
    b.set(a)
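The behaviour described above amounts to parsing a JSON string into a plain Python dict before your task sees it. A rough sketch in plain Python (the real Flyte type engine is more involved; the sample payload is made up):

```python
import json

# What a command-line JSON string for a Types.Generic input turns into:
a = json.loads('{"operation": "add", "value": 1, "operand": 2}')
assert isinstance(a, dict)      # Types.Generic arrives as a plain dict

# The same logic as the operate() task body above:
if a['operation'] == 'add':
    a['value'] += a['operand']
print(a['value'])  # 3
```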

Reading line by line in Julia

I am trying to read from a file where each line contains an integer.
But when I wrote this:
f=open("data.txt")
a=readline(f)
arr=int64[]
push!(arr,int(a))
I am getting
ERROR: no method getindex(Function)
in include_from_node1 at loading.jl:120
The error comes from int64[], since int64 is a function and you are trying to index it with []. To create an array of Int64 (note the case), you should use, e.g., arr = Int64[].
Another problem in your code is the int(a) - since you have an array of Int64, you should also specify the same type when parsing, e.g., push!(arr,parseint(Int64,a))

Nested NSDictionary in TXTRecordData Returns NULL

I'm using NSNetService and want to store some data in TXTRecordData. If I just store an NSString, it works OK - but if I store a nested dictionary then dataFromTXTRecord... returns nil. For example:
NSData* d = [NSNetService dataFromTXTRecordDictionary:@{@"A": @"B"}];
// d != nil
NSData* d = [NSNetService dataFromTXTRecordDictionary:@{@"A": @{@"X": @"Y"}}];
// d == nil
Obviously I seem to be abusing TXTRecordData but I'd like to understand what's going on. I even tried to serialize my nested dictionary to a string, but it still returns nil. TXTRecordData seems very particular. Anyone know why?
A Bonjour/DNS text record can only store a flat list of key/value pairs, not an arbitrary nested dictionary.
From DNS-SD (Rendezvous) TXT record format:
DNS-SD uses DNS TXT records to store arbitrary name/value pairs
conveying additional information about the named service. Each
name/value pair is encoded as its own constituent string within the
DNS TXT record, in the form "name=value". Everything up to the first
'=' character is the name. Everything after the first '=' character to
the end of the string (including subsequent '=' characters, if any) is
the value.
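The flat name=value layout the spec describes can be sketched as follows: each pair becomes one length-prefixed string, which is why a nested dictionary has no representation at this level (encode_txt is a hypothetical illustration, not part of NSNetService):

```python
def encode_txt(pairs):
    """Encode flat key/value pairs in DNS TXT record style:
    each "name=value" string is preceded by a one-byte length."""
    out = bytearray()
    for name, value in pairs.items():
        entry = f"{name}={value}".encode("ascii")
        if len(entry) > 255:
            raise ValueError("each TXT string is limited to 255 bytes")
        out.append(len(entry))      # one-byte length prefix
        out += entry                # the "name=value" bytes
    return bytes(out)

print(encode_txt({"A": "B"}))  # b'\x03A=B'
```

Since every value must fit into one such string, the usual workaround is to serialize nested data yourself (e.g. to a short JSON string) and store it as a single value, subject to the 255-byte limit.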