Python script lab to load csvjson.json - pymongo

Prepare a Python script to load the csvjson.json file into a MongoDB database.
Develop a REST API application using one of the frameworks such as Flask, FastAPI or Django.
Create the following services:
a. List items (GET)
b. List items by Title, Body, SKU (GET) - Body should support wildcard search
c. Create item (POST)
d. Delete item (DELETE)
e. Export as CSV (GET)
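
A minimal sketch of how the load script and the five services could look, assuming Flask, pymongo and a local MongoDB instance; the database name itemsdb, the collection name items, and the route paths are placeholders, while the Title/Body/SKU field names come from the service list above:

import csv
import io
import json

from bson import ObjectId
from flask import Flask, Response, jsonify, request
from pymongo import MongoClient

# assumed connection details and names; adjust to your environment
client = MongoClient("mongodb://localhost:27017")
items = client["itemsdb"]["items"]

app = Flask(__name__)


def load_file(path="csvjson.json"):
    # one-off load of the exported csvjson.json (assumed to be a JSON array)
    with open(path) as f:
        items.insert_many(json.load(f))


def serialize(doc):
    doc["_id"] = str(doc["_id"])  # ObjectId is not JSON-serializable
    return doc


# a. list items
@app.route("/items", methods=["GET"])
def list_items():
    return jsonify([serialize(d) for d in items.find()])


# b. list items by Title, Body or SKU; Body uses a wildcard (regex) match
@app.route("/items/search", methods=["GET"])
def search_items():
    query = {}
    if "title" in request.args:
        query["Title"] = request.args["title"]
    if "sku" in request.args:
        query["SKU"] = request.args["sku"]
    if "body" in request.args:
        query["Body"] = {"$regex": request.args["body"], "$options": "i"}
    return jsonify([serialize(d) for d in items.find(query)])


# c. create item
@app.route("/items", methods=["POST"])
def create_item():
    result = items.insert_one(request.get_json())
    return jsonify({"_id": str(result.inserted_id)}), 201


# d. delete item
@app.route("/items/<item_id>", methods=["DELETE"])
def delete_item(item_id):
    items.delete_one({"_id": ObjectId(item_id)})
    return "", 204


# e. export all items as CSV
@app.route("/items/export", methods=["GET"])
def export_csv():
    docs = [serialize(d) for d in items.find()]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=sorted({k for d in docs for k in d}))
    writer.writeheader()
    writer.writerows(docs)
    return Response(out.getvalue(), mimetype="text/csv")


if __name__ == "__main__":
    load_file()
    app.run(debug=True)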

Related

In JMeter, how to generate a unique order ID and pass it in the request body, where the request body is sent from a CSV file

I'm passing multiple request bodies for an API using a RequestBody.CSV file.
This request body has an orderID, and it should be a new UUID every time. I'm passing this orderID using User Parameters
and replacing it in the CSV file like this.
This is the CSV Data Set Config used.
When I run the test, the orderID is not generated as a random value; it is passed through unresolved.
This is the HTTP request I'm sending.
How can I send a random orderID in the request body?
In your CSV file, change ${orderID} to ${__UUID}; JMeter's __UUID() function generates a unique UUID v4 each time it is called.
In the HTTP Request sampler, change ${requestbody} to ${__eval(${requestbody})}; JMeter's __eval() function evaluates nested JMeter functions or variables, so a unique UUID will be generated on each iteration of each virtual user.
More information on JMeter Functions concept: Apache JMeter Functions - An Introduction
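
For illustration (the exact column layout is an assumption based on the description above), a row of the CSV file feeding the requestbody variable could contain:

{"orderID": "${__UUID}"}

and the HTTP Request body would then hold only:

${__eval(${requestbody})}

so the nested __UUID call inside the CSV value is re-evaluated on every iteration.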

Auth0: How to retrieve over 1000 users (and make this call via a Python script that can be run as a cron job)

I am trying to use Auth0 to get a list of users when my user list is >1000 (approx 2000)
So I understand a bit better now how this works after following the steps at:
https://auth0.com/docs/manage-users/user-migration/bulk-user-exports
There are three steps:
Use a POST call to the https://MY_DOMAIN/oauth/token endpoint to get an auth token (done)
Then take this token and insert it into the next POST call to the endpoint: https://MY_DOMAIN/api/v2/jobs/users-exports
Then take the job_id and insert it into the 3rd GET call to the endpoint: https://MY_DOMAIN/api/v2/jobs/MY_JOB_ID
But this just gives me a link to a document that I download. Essentially it is the same end result as using the User Import / Export extension.
This is NOT what I want. I want to be able to call an endpoint and have it return a list of all the users (similar to Retrieve Users with the Get Users Endpoint). I need it done this way so that I can write a Python script and run it as a cron job.
However, since I have over 1000 users, I am getting the below error when I call the GET /api/v2/users endpoint.
auth0.v3.exceptions.Auth0Error: 400: You can only page through the first 1000 records. See https://auth0.com/docs/users/search/v3/view-search-results-by-page#limitation
Can anyone help? Can this be done the way I wish it to be?
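
For reference, a minimal sketch of the three-step export flow described above, written so it could run from a cron job (the domain, client credentials and field list are placeholders; the export file comes back as gzipped newline-delimited JSON):

import gzip
import json
import time

import requests

DOMAIN = "MY_DOMAIN"  # placeholder tenant domain

# 1. get a Management API token
token = requests.post(
    f"https://{DOMAIN}/oauth/token",
    json={
        "grant_type": "client_credentials",
        "client_id": "MY_CLIENT_ID",          # placeholder
        "client_secret": "MY_CLIENT_SECRET",  # placeholder
        "audience": f"https://{DOMAIN}/api/v2/",
    },
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. start a user-export job
job = requests.post(
    f"https://{DOMAIN}/api/v2/jobs/users-exports",
    headers=headers,
    json={"format": "json", "fields": [{"name": "user_id"}, {"name": "email"}]},
).json()

# 3. poll the job until it completes, then download and parse the file
while True:
    status = requests.get(f"https://{DOMAIN}/api/v2/jobs/{job['id']}", headers=headers).json()
    if status["status"] == "completed":
        break
    time.sleep(5)

raw = requests.get(status["location"]).content
users = [json.loads(line) for line in gzip.decompress(raw).splitlines() if line]
print(len(users))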

How to make a controller in Odoo for a custom value?

I need to make a custom controller in Odoo for getting information from a particular task, and I am able to produce the result. But now I'm facing an issue.
The client needs to retrieve the information with a particular field.
For example,
the client needs to retrieve the information by tracking number, and the data must be in JSON format. If the tracking number is 15556456356, the URL should be www.customurl.com/dataset/15556456356
The route for that URL should be @http.route('/dataset/<string:tracking_number>', type='http' or 'json', auth="user" or "public"); basically, the method should be like this:
import json
from odoo import http
from odoo.http import Response, request


class tracking(http.Controller):

    # if the user must be authenticated, use auth="user"
    @http.route('/dataset/<string:tracking_number>', type='http', auth="public")
    def tracking(self, tracking_number):  # use the same variable name as in the route
        # compute the result from the given tracking_number; it should be
        # a dict (or list) so it can be passed to json.dumps
        result = {}
        return Response(json.dumps(result), content_type='application/json;charset=utf-8', status=200)
This method accepts an HTTP request and returns a JSON response; if the client is sending JSON requests, you should change to type='json'. Don't forget to import the file in the __init__.py.
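For example, assuming the controller above is saved as controllers/tracking.py inside a module named my_module (both names are placeholders), the imports would be:

# my_module/controllers/__init__.py
from . import tracking

# my_module/__init__.py
from . import controllers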
Let's take an example: say I want to return some information about a sale.order for a given ID in the URL:
import json
from odoo import http
from odoo.http import Response, request


class Tracking(http.Controller):

    @http.route('/dataset/<int:sale_id>', type='http', auth="public")
    def tracking(self, sale_id):
        # read the information as the superuser
        result = request.env['sale.order'].sudo().browse([sale_id]).read(['name', 'date_order'])
        return Response(json.dumps(result), content_type='application/json;charset=utf-8', status=200)
So when I open this URL in my browser: http://localhost:8069/dataset/1, I get the JSON result for that order.
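For reference, the serialized result of that read() call is shaped roughly like this (the values are made up for illustration; on recent Odoo versions the datetime value may need to be converted to a string before json.dumps accepts it):

[{"id": 1, "name": "S00001", "date_order": "2019-05-06 10:00:00"}]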

What data can I save from the Spotify API?

I'm building a website and I'm using the Spotify API as a music library. I would like to add more filter and ordering options for searching tracks than the API allows me to, so I was wondering what track/song data I can save to my DB from the API, like artist name or popularity.
I would like to save: Name, Artists, Album and some other fields. Is that possible, or is it against the terms and conditions?
Thanks in advance!
Yes, it is possible.
The Spotify API exposes its data through endpoints.
The Spotify API endpoint reference is here.
Each endpoint deals with the specific kind of data being requested by the client (you).
I'll give you one example; the same logic applies to all other endpoints.
import json
import requests
"""
Import the requests library in order to make API calls.
Alternatively, you can also use a wrapper like "Spotipy"
instead of requesting directly.
"""

# the desired endpoint
SEARCH_ENDPOINT = 'https://api.spotify.com/v1/search'

# define your call
def search_by_track_and_artist(artist, track):
    path = 'token.json'  # you need a token for this call;
    # the endpoint reference page will provide you with one,
    # and you can store it in a file
    with open(path) as t:
        token = json.load(t)
    # call the API with authentication
    myparams = {'type': 'track'}
    myparams['q'] = "artist:{} track:{}".format(artist, track)
    resp = requests.get(SEARCH_ENDPOINT, params=myparams,
                        headers={"Authorization": "Bearer {}".format(token)})
    return resp.json()
try it:
search_by_track_and_artist('Radiohead', 'Karma Police')
Store the data and process it as you wish. But you must comply with Spotify terms in order to make it public.
sidenote: Spotipy docs.
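
To show what you might actually persist (Name, Artists, Album, popularity), here is a minimal sketch built on the search call above; the field paths follow the standard search response shape, and the extract_track_fields helper is made up for illustration:

def extract_track_fields(search_response):
    # collect the fields worth saving from each matching track
    records = []
    for item in search_response.get('tracks', {}).get('items', []):
        records.append({
            'name': item['name'],
            'artists': [artist['name'] for artist in item['artists']],
            'album': item['album']['name'],
            'popularity': item['popularity'],
        })
    return records

tracks = extract_track_fields(search_by_track_and_artist('Radiohead', 'Karma Police'))
# store `tracks` in your own database, keeping Spotify's terms of use in mind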

How to pass parameters in a POST call in Pentaho Spoon?

I have made an API and I want to access a POST call in it. I made the following transformation in Kettle,
with a params field in the Generate Rows step,
and the REST Client step configured accordingly,
but I am unable to get any of the parameters in my POST call on the server side. If I write a simple POST call in Python as:
import requests

url = "http://10.131.70.73:5000/searchByLatest"
payload = {'search_query': 'donald trump', 'till_date': 'Tuesday, 7 June 2016 at 10:40'}
# params= sends the values as URL query parameters,
# which is why Flask can read them with request.args.get()
r = requests.post(url, params=payload)
print(r.text)
print(r.status_code)
I am able to get the parameters with request.args.get("search_query") on the server side in Flask. How can I make an equivalent POST call in Kettle?
I found the solution myself eventually. Describe the fields in the Generate Rows step,
and in the Parameters tab of the REST Client step, use the same fields.
Works perfectly!