I'm trying to put together a demo of Neptune using Neptune workbench, but something's not working right. I've got this block set up:
from __future__ import print_function # Python 2/3 compatibility
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
graph = Graph()
cluster_url = 'your-neptune-endpoint'  # redacted - my cluster endpoint
remoteConn = DriverRemoteConnection(f'wss://{cluster_url}:8182/gremlin', 'g')
g = graph.traversal().withRemote(remoteConn)
import uuid
tmp = uuid.uuid4()
tmp_id = str(tmp)
def get_id(name):
    uid = uuid.uuid5(uuid.NAMESPACE_DNS, f"{name}.licensing.company.com")
    return str(uid)

def add_sku(name):
    tmp_id = get_id(name)
    g.addV('SKU').property('id', tmp_id, 'name', name)
    return name

def get_values():
    return g.V().properties().toList()
The problem is that calling add_sku doesn't result in a vertex being added to the graph. Doing the same operation in a cell with the %%gremlin magic works, and I can retrieve values through Python, but I can't add vertices. Does anyone see what I'm missing here?
The Python code is not working because the traversal is missing a terminal step (such as next() or iterate()) at the end, which is what forces it to be submitted and evaluated. If you add the terminal step it should work:
g.addV('SKU').property('id', tmp_id, 'name', name).next()
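Applied to the add_sku function from the question, a corrected version would look like the sketch below (note that setting each key/value pair with its own property() step is the more common form; a multi-argument property() call treats the extra arguments as meta-properties):

def add_sku(name):
    tmp_id = get_id(name)
    # .next() is the terminal step that actually submits the traversal
    g.addV('SKU').property('id', tmp_id).property('name', name).next()
    return name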
Related
I followed the official tutorial from Microsoft: https://learn.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool
When I execute:
#Bind model within Spark session
model = pcontext.bind_model(
    return_types=RETURN_TYPES,
    runtime=RUNTIME,
    model_alias="Sales",      # This alias will be used in the PREDICT call to refer to this model
    model_uri=AML_MODEL_URI,  # In case of AML, it will be AML_MODEL_URI
    aml_workspace=ws          # This is only for AML. In case of ADLS, this parameter can be removed
).register()
I got: No module named 'azureml.automl'
My Notebook
As per the repro from my end, the code you have shared works as expected, and I do not see the error message you are experiencing.
I even tested the same code on a newly created Apache Spark 3.1 runtime, and it works as expected there as well.
I would suggest creating a new cluster and checking whether you are able to run the above code.
I solved it. In my case it works best like this:
Imports
#Import libraries
from pyspark.sql.functions import col, pandas_udf, udf, lit
from notebookutils.mssparkutils import azureML
from azureml.core import Workspace, Model
from azureml.core.authentication import ServicePrincipalAuthentication
import joblib
import pandas as pd
ws = azureML.getWorkspace("AzureMLService")
spark.conf.set("spark.synapse.ml.predict.enabled","true")
Predict function
def forecastModel():
    model_path = Model.get_model_path(model_name="modelName", _workspace=ws)
    modeljob = joblib.load(model_path + "/model.pkl")
    validation_data = spark.read.format("csv") \
        .option("header", True) \
        .option("inferSchema", True) \
        .option("sep", ";") \
        .load("abfss://....csv")
    validation_data_pd = validation_data.toPandas()
    predict = modeljob.forecast(validation_data_pd)
    return predict
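For reference, a minimal usage sketch under the setup above (the model name and CSV path inside forecastModel() are placeholders):

predictions = forecastModel()
print(predictions)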
to_sql() is printing every INSERT statement within my Jupyter Notebook, and this makes everything run very slowly for millions of records. How can I decrease the verbosity significantly? I haven't found any verbosity setting for this function. I've tried %%capture as suggested in "How do you suppress output in IPython Notebook?"; the same method works for another simple test case with print(), but not for to_sql().
from sqlalchemy import create_engine, NVARCHAR
import cx_Oracle
df.to_sql('table_name', engine, if_exists='append',
          schema='schema', index=False, chunksize=10000,
          dtype={col_name: NVARCHAR(length=20) for col_name in df})
Inside create_engine(), set echo=False and all logging will be disabled. More detail here: https://docs.sqlalchemy.org/en/13/core/engines.html#more-on-the-echo-flag.
from sqlalchemy import create_engine, NVARCHAR
import cx_Oracle
host='hostname.net'
port=1521
sid='DB01' #instance is the same as SID
user='USER'
password='password'
sid = cx_Oracle.makedsn(host, port, sid=sid)
cstr = 'oracle://{user}:{password}@{sid}'.format(
    user=user,
    password=password,
    sid=sid
)

engine = create_engine(
    cstr,
    convert_unicode=False,
    pool_recycle=10,
    pool_size=50,
    echo=False
)
Thanks to @GordThompson for pointing me in the right direction!
I am trying to use the SciBERT pre-trained model, namely scibert-scivocab-uncased, in the following way:
!pip install pytorch-pretrained-bert
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
import logging
import matplotlib.pyplot as plt
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenized_text = tokenizer.tokenize(text)  # text: the input string (not shown in the question)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertModel.from_pretrained('/Users/.../Downloads/scibert_scivocab_uncased-3.tar.gz')
And I get the following error:
EOFError: Compressed file ended before the end-of-stream marker was reached
I downloaded the file from the website (https://github.com/allenai/scibert)
I converted it from "tar" to gzip
Nothing worked.
Any hint on how to approach this?
Thank you!
In the new version of pytorch-pretrained-BERT, i.e. in transformers, you can do the following to load a pretrained model after you un-tar it:
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("/your/local/path/to/scibert_scivocab_uncased")
You need to unzip the package and rename the JSON file to config.json.
Then just point from_pretrained at the folder where you unzipped the package. It should work.
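As a sketch, those two steps could look like this in Python (the archive path and the original config file name, bert_config.json, are assumptions based on the SciBERT release; adjust them to your download):

import os
import tarfile

archive = "/your/local/path/to/scibert_scivocab_uncased.tar"
target = "/your/local/path/to/scibert_scivocab_uncased"

# extract the archive into the target folder
with tarfile.open(archive) as tar:
    tar.extractall(target)

# transformers expects the config file to be named config.json
os.rename(os.path.join(target, "bert_config.json"),
          os.path.join(target, "config.json"))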
I am new to JupyterLab and trying to learn.
When I try to plot a graph, it works fine in Jupyter Notebook but does not show the result in JupyterLab. Can anyone help me with this?
Here are the codes below:
import pandas as pd
import pandas_datareader.data as web
import time
# import matplotlib.pyplot as plt
import datetime as dt
import plotly.graph_objects as go
import numpy as np
from matplotlib import style
# from matplotlib.widgets import EllipseSelector
from alpha_vantage.timeseries import TimeSeries
Here is the code for plotting below:
def candlestick(df):
    fig = go.Figure(data = [go.Candlestick(x = df["Date"], open = df["Open"], high = df["High"], low = df["Low"], close = df["Close"])])
    fig.show()
JupyterLab Result:
Link to the image (JupyterLab)
Jupyter Notebook Result:
Link to the image (Jupyter Notebook)
I have updated both JupyterLab and Notebook to the latest version. I do not know what is causing JupyterLab to stop showing the figure.
Thank you for reading my post. Help would be greatly appreciated.
Note:
I did not include the parts for reading the data (stock OHLC values), since they contain my API keys. I am sorry for the inconvenience.
Also, this is my second post on Stack Overflow. If this is not a well-written post, I am sorry; I will try to put more effort in where possible. Thank you again for the help.
TL;DR:
run the following and then restart your JupyterLab:
jupyter labextension install @jupyterlab/plotly-extension
Start the lab with:
jupyter lab
Test with the following code:
import plotly.graph_objects as go
from alpha_vantage.timeseries import TimeSeries
def candlestick(df):
    fig = go.Figure(data = [go.Candlestick(x = df.index, open = df["1. open"], high = df["2. high"], low = df["3. low"], close = df["4. close"])])
    fig.show()

# preferable to save your key as an environment variable....
key = "YOUR_API_KEY"  # key here
ts = TimeSeries(key = key, output_format = "pandas")
data_av_hist, meta_data_av_hist = ts.get_daily('AAPL')
candlestick(data_av_hist)
Note: depending on your system and on whether you installed JupyterLab or bare Jupyter, jlab may work instead of jupyter.
Longer explanation:
Since this issue is with plotly and not matplotlib, you do NOT have to use the "inline magic" of:
%matplotlib inline
Each extension has to be installed into JupyterLab; you can see the installed extensions with:
jupyter labextension list
For a more verbose explanation of another extension, please see this related issue:
jupyterlab interactive plot
Patrick Collins already gave the correct answer.
However, the current JupyterLab might not be supported by the extension, and for various reasons one might not be able to update JupyterLab:
ValueError: The extension "@jupyterlab/plotly-extension" does not yet support the current version of JupyterLab.
In that case, a quick workaround is to save the image and then display it:
from IPython.display import Image
fig.write_image("image.png")
Image(filename='image.png')
To get the write_image() method of Plotly to work, kaleido must be installed:
pip install -U kaleido
This is a full example (originally from Plotly) to test this workaround:
import os
import pandas as pd
import plotly.express as px
from IPython.display import Image
df = pd.DataFrame([
    dict(Task="Job A", Start='2009-01-01', Finish='2009-02-28', Resource="Alex"),
    dict(Task="Job B", Start='2009-03-05', Finish='2009-04-15', Resource="Alex"),
    dict(Task="Job C", Start='2009-02-20', Finish='2009-05-30', Resource="Max")
])

fig = px.timeline(df, x_start="Start", x_end="Finish", y="Resource", color="Resource")

if not os.path.exists("images"):
    os.mkdir("images")

fig.write_image("images/fig1.png")
Image(filename='images/fig1.png')
I'm following this tutorial:
https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-python.html
How can I add a node and then retrieve the same node?
from __future__ import print_function # Python 2/3 compatibility
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
graph = Graph()
remoteConn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin','g')
g = graph.traversal().withRemote(remoteConn)
print(g.V().limit(2).toList())
remoteConn.close()
All the above is doing right now is retrieving 2 nodes, right?
If you want to add a vertex and then return information about the vertex (assuming you did not provide your own ID), you can do something like:
newId = g.addV("mylabel").id().next()
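You can then read the same vertex back with that ID; a minimal sketch (valueMap(True) also includes the vertex's id and label in the result):

# retrieve the vertex that was just added, using the id returned above
vertex = g.V(newId).valueMap(True).next()
print(vertex)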