Getting an error in a Python script when using QuickSight API calls to retrieve the value of a user parameter selection - api

I am working on a Python script that uses the QuickSight APIs to retrieve the user parameter selections, but I keep getting the error below:
parameters = response['Dashboard']['Parameters'] KeyError: 'Parameters'
If I run different code to retrieve the datasets in my QuickSight account, it works, but the Parameters code doesn't. I think I am missing some configuration.
# Code to retrieve the parameters from a QS dashboard (which fails):
import boto3

quicksight = boto3.client('quicksight')
response = quicksight.describe_dashboard(
    AwsAccountId='99999999999',
    DashboardId='zzz-zzzz-zzzz'
)
parameters = response['Dashboard']['Parameters']
for parameter in parameters:
    print(parameter['Name'], ':', parameter['Value'])
# Code to display the datasets in the QS account (which works):
import boto3
import json

account_id = '99999999999'
session = boto3.Session(profile_name='default')
qs_client = session.client('quicksight')
response = qs_client.list_data_sets(AwsAccountId=account_id, MaxResults=100)
results = response['DataSetSummaries']
while "NextToken" in response.keys():
    response = qs_client.list_data_sets(AwsAccountId=account_id, MaxResults=100, NextToken=response["NextToken"])
    results.extend(response["DataSetSummaries"])
for i in results:
    x = i['DataSetId']
    try:
        response = qs_client.describe_data_set(AwsAccountId=account_id, DataSetId=x)
        print("succeeded loading: {} for data set {} ".format(x, response['DataSet']['Name']))
    except:
        print("failed loading: {} ".format(x))

Related

How to add username and name columns to pandas dataframe with search_all_tweets lookup in python

I am trying to collect tweets from 2022 using the Twitter API. I can record the tweet_fields for the tweets, but I can't figure out how to add columns for the username and name (the user_fields) for each tweet.
I'm running the following code:
import requests
import os
import json
import tweepy
import pandas as pd
from datetime import timedelta
import datetime

bearer_token = "my_bearer_token_here"
keyword = "#WomeninSTEM"
start_time = "2022-01-01T12:01:00Z"
end_time = "2023-01-01T12:01:00Z"
client = tweepy.Client(bearer_token=bearer_token)
responses = client.search_all_tweets(query="#WomeninSTEM", max_results=500, start_time=start_time, end_time=end_time,
                                     user_fields=["username", "name"],
                                     tweet_fields=["in_reply_to_user_id", "author_id", "lang",
                                                   "public_metrics", "created_at", "conversation_id"])

# I can't get the username or name columns to work here.
column = []
for i in range(len(responses.data)):
    row = []
    Username = responses.data[i]["username"]
    row.append(Username)
    name = responses.data[i]["name"]
    row.append(name)
    text = responses.data[i].text
    row.append(text)
    favoriteCount = responses.data[i].public_metrics["like_count"]
    row.append(favoriteCount)
    retweet_count = responses.data[i].public_metrics["retweet_count"]
    row.append(retweet_count)
    reply_count = responses.data[i].public_metrics["reply_count"]
    row.append(reply_count)
    quote_count = responses.data[i].public_metrics["quote_count"]
    row.append(quote_count)
    created = responses.data[i].created_at
    row.append(created)
    ReplyTo = responses.data[i].text.split(" ")[0]
    row.append(ReplyTo)
    ReplyToUID = responses.data[i].in_reply_to_user_id
    row.append(ReplyToUID)
    ConversationID = responses.data[i]["conversation_id"]
    row.append(ConversationID)
    column.append(row)
data = pd.DataFrame(column)
Whenever I try to include username and name, I get this error: KeyError Traceback (most recent call last)
Assuming you're querying https://api.twitter.com/2/tweets/[...], the response does not have a 'username' or a 'name' field, which is why you're getting a KeyError when trying to access them.
It does have an 'author_id' field, which you can use to perform an additional query at https://api.twitter.com/2/users/:id and retrieve 'username' and 'name'.
More info here and here.
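Alternatively, Tweepy can return the user objects in the same call via the author_id expansion, which avoids a second request per tweet. A minimal sketch, assuming the same client, query, and time window as in the question:

# Request the author_id expansion so the response's `includes` carries User objects.
responses = client.search_all_tweets(query="#WomeninSTEM", max_results=500,
                                     start_time=start_time, end_time=end_time,
                                     expansions=["author_id"],
                                     user_fields=["username", "name"],
                                     tweet_fields=["author_id", "created_at"])

# Build an author_id -> User lookup from the expansion payload.
users = {user.id: user for user in responses.includes["users"]}

for tweet in responses.data:
    author = users[tweet.author_id]
    print(author.username, author.name, tweet.text)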

How to perform a SQL query with SQLAlchemy to later pass it into a pandas dataframe

I have followed this article's setup and it all seems to work properly, but I would now like to perform a SQL query and pass the result into a pandas data frame. How could I proceed?
This is what I have now:
import os
import urllib.parse

import databases
import sqlalchemy
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

host_server = os.environ.get('host_server', 'localhost')
db_server_port = urllib.parse.quote_plus(str(os.environ.get('db_server_port', '5432')))
database_name = os.environ.get('database_name', 'my_data_base123')
db_username = urllib.parse.quote_plus(str(os.environ.get('db_username', 'my_user_name123')))
db_password = urllib.parse.quote_plus(str(os.environ.get('db_password', 'my_password123')))
ssl_mode = urllib.parse.quote_plus(str(os.environ.get('ssl_mode', 'prefer')))
DATABASE_URL = 'postgresql://{}:{}@{}:{}/{}?sslmode={}'.format(db_username, db_password, host_server, db_server_port, database_name, ssl_mode)
database = databases.Database(DATABASE_URL)
metadata = sqlalchemy.MetaData()
engine = sqlalchemy.create_engine(
    DATABASE_URL, pool_size=3, max_overflow=0
)
metadata.create_all(engine)  # create_all binds to the engine, not the databases.Database wrapper
app = FastAPI(title="REST API using FastAPI PostgreSQL Async EndPoints")
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"]
)

@app.on_event("startup")
async def startup():
    await database.connect()

@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()
Trying to access the database with a SQL query:
from sqlalchemy import text

with engine.connect() as conn:
    result = conn.execute(text("select 'hello world'"))
    print(result.all())
as it says in the SQLAlchemy documentation, but I get errors like:
print(result.all())
AttributeError: 'ResultProxy' object has no attribute 'all'
Even if I try to access the tables of my database:
with engine.connect() as conn:
    result = conn.execute(text("select * FROM users"))
    print(result.all())
I get the same error.
It is solved; I had to upgrade SQLAlchemy:
sudo pip install sqlalchemy --upgrade
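To then get the query result into a pandas DataFrame, a minimal sketch, assuming the engine from the question, SQLAlchemy 1.4+, and pandas installed:

import pandas as pd
from sqlalchemy import text

# pandas accepts a SQLAlchemy connection plus a text() clause directly.
with engine.connect() as conn:
    df = pd.read_sql_query(text("SELECT * FROM users"), conn)

print(df.head())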

403 Access Denied Error while creating a dataset in DOMO

I'm getting a 403 Access Denied error while trying to create a dataset in DOMO. Anyone who is good with DOMO, please help me.
import logging

from pydomo import Domo
from pydomo.datasets import DataSetRequest, Schema, Column, ColumnType, Policy
from pydomo.datasets import PolicyFilter, FilterOperator, PolicyType, Sorting

client_id = ''
client_secret_code = ''
data_set_id = ''
api_host = 'api.domo.com'
domo = Domo(client_id, client_secret_code, logger_name='foo', log_level=logging.INFO,
            api_host=api_host)
data_set_name = 'Testing'
data_set_description = 'Test_to_update_schema'
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logging.getLogger().addHandler(handler)
domo = Domo(client_id, client_secret_code, logger_name='foo', log_level=logging.INFO,
            api_host=api_host)
dsr = DataSetRequest()
dsr.name = data_set_name
dsr.description = data_set_description
dsr.schema = Schema([Column(ColumnType.STRING, 'Id'),
                     Column(ColumnType.STRING, 'Publisher_Name')])
dataset = domo.datasets.create(dsr)
print(dataset)
This is an authorization error. One of the credentials is wrong, either client_id or client_secret_code, or maybe both.
I had the same error in .NET when using Domo to connect and get data from a dataset. I fixed it by creating a base64 string and passing it for auth, with the Authorization header set to Basic, for example:
"Basic", Convert.ToBase64String(Encoding.UTF8.GetBytes($"{clientId}:{clientSecret}"))
I am not sure how this goes in Python, but create a base64 string and pass it for auth.
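For the Python side, a minimal sketch of the same Basic-auth idea using the requests library; the token endpoint and scope below follow DOMO's documented OAuth flow but should be treated as assumptions to verify against the docs:

import base64

import requests

client_id = ''      # placeholder, as in the question
client_secret = ''  # placeholder, as in the question

# Build the Basic auth header, mirroring the .NET snippet above.
token = base64.b64encode("{}:{}".format(client_id, client_secret).encode("utf-8")).decode("ascii")
headers = {"Authorization": "Basic " + token}

# Exchange the credentials for an access token.
resp = requests.get("https://api.domo.com/oauth/token",
                    params={"grant_type": "client_credentials", "scope": "data"},
                    headers=headers)
print(resp.status_code, resp.json())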

Telegram bot: How to get chosen inline result

I'm sending InlineQueryResultArticle to clients, and I'm wondering how to get the chosen result and its data (like result_id, ...).
Here is the code to send the results:
import telegram
from telegram import InlineQueryResultArticle, InputTextMessageContent
from telegram.ext import Updater, InlineQueryHandler

token = 'Bot token'
bot = telegram.Bot(token)
updater = Updater(token)
dispatcher = updater.dispatcher

def get_inline_results(bot, update):
    query = update.inline_query.query
    results = list()
    results.append(InlineQueryResultArticle(id='1000',
                                            title="Book 1",
                                            description='Description of this book, author ...',
                                            thumb_url='https://fakeimg.pl/100/?text=book%201',
                                            input_message_content=InputTextMessageContent(
                                                'chosen book:')))
    results.append(InlineQueryResultArticle(id='1001',
                                            title="Book 2",
                                            description='Description of the book, author...',
                                            thumb_url='https://fakeimg.pl/300/?text=book%202',
                                            input_message_content=InputTextMessageContent(
                                                'chosen book:')))
    update.inline_query.answer(results)

inline_query_handler = InlineQueryHandler(get_inline_results)
dispatcher.add_handler(inline_query_handler)
I'm looking for a method like on_inline_chosen(data) to get the id of the chosen item (1000 or 1001 for the snippet above) and then send the appropriate response to the user.
You should set /setinlinefeedback in @BotFather; then you will get this update.
OK, I got my answer from here.
Handling the user's chosen result:
from telegram.ext import ChosenInlineResultHandler

def on_result_chosen(bot, update):
    print(update.to_dict())
    result = update.chosen_inline_result
    result_id = result.result_id
    query = result.query
    user = result.from_user.id
    print(result_id)
    print(user)
    print(query)
    print(result.inline_message_id)
    bot.send_message(user, text='fetching book data with id:' + result_id)

result_chosen_handler = ChosenInlineResultHandler(on_result_chosen)
dispatcher.add_handler(result_chosen_handler)
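Both snippets already register their handlers on the dispatcher; what is still missing to actually run the bot is starting the update loop. A minimal sketch, assuming the python-telegram-bot v12-style API used above:

# Start long polling so the registered handlers receive updates.
updater.start_polling()
updater.idle()  # block until interrupted, keeping the bot alive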

Sending form data with an HTTP PUT request using Grinder API

I'm trying to replicate the following successful cURL operation with Grinder.
curl -X PUT -d "title=Here%27s+the+title&content=Here%27s+the+content&signature=myusername%3A3ad1117dab0ade17bdbd47cc8efd5b08" http://www.mysite.com/api
Here's my script:
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest
from HTTPClient import NVPair
import hashlib

test1 = Test(1, "Request resource")
request1 = HTTPRequest(url="http://www.mysite.com/api")
test1.record(request1)
log = grinder.logger.info
test1.record(log)
m = hashlib.md5()

class TestRunner:
    def __call__(self):
        params = [NVPair("title", "Here's the title"), NVPair("content", "Here's the content")]
        params.sort(key=lambda param: param.getName())
        ps = ""
        for param in params:
            ps = ps + param.getValue() + ":"
        ps = ps + "myapikey"
        m.update(ps)
        params.append(NVPair("signature", ("myusername:" + m.hexdigest())))
        request1.setFormData(tuple(params))
        result = request1.PUT()
The test runs okay, but it seems that my script doesn't actually send any of the param data to the API, and I can't work out why. No errors are generated, but I get a 401 Unauthorized response from the API, indicating that the PUT request reached it but, without a signature, was rejected.
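One detail worth flagging in the script above, independent of the form-data issue: hashlib digests accumulate state across update() calls, so reusing the module-level m means every run after the first computes its signature over all previous payloads as well. A minimal fix sketch, using the same names as the script:

        # Inside __call__: create a fresh digest per run instead of reusing the module-level one.
        m = hashlib.md5()
        m.update(ps)
        params.append(NVPair("signature", "myusername:" + m.hexdigest()))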
This isn't exactly an answer, more a workaround I came up with, which I've decided to post since this question hasn't yet received any responses and it may help anyone else trying to achieve the same thing.
The workaround is basically to use the httplib and urllib modules to build and make the PUT request instead of the HTTPClient module.
import hashlib
import httplib, urllib
....
params = [("title", "Here's the title"), ("content", "Here's the content")]
params.sort(key=lambda param: param[0])
ps = ""
for param in params:
    ps = ps + param[1] + ":"
ps = ps + "myapikey"
m = hashlib.md5()
m.update(ps)
params.append(("signature", "myusername:" + m.hexdigest()))
params = urllib.urlencode(params)
print params
headers = {"Content-type": "application/x-www-form-urlencoded"}
conn = httplib.HTTPConnection("www.mysite.com:80")
conn.request("PUT", "/api", params, headers)
response = conn.getresponse()
print response.status, response.reason
print response.read()
conn.close()
(Based on the example at the bottom of this documentation page.)
Refer to the multipart form POST example in the Grinder script gallery, but change the POST to a PUT. It works for me.
# Assumes the imports used by the script gallery example:
from net.grinder.script.Grinder import grinder
from HTTPClient import Codecs, NVPair
from jarray import zeros

files = (NVPair("self", "form.py"),)
parameters = (NVPair("run number", str(grinder.runNumber)),)

# This is the Jython way of creating an NVPair[] Java array
# with one element.
headers = zeros(1, NVPair)

# Create a multi-part form encoded byte array.
data = Codecs.mpFormDataEncode(parameters, files, headers)
grinder.logger.output("Content type set to %s" % headers[0].value)

# Call the version of PUT that takes a byte array
# (request1 is the HTTPRequest from the question).
result = request1.PUT("/upload", data, headers)