How to solve an abstract model using Pyomo in Google Colab - google-colaboratory

I am learning how to use Pyomo in Google Colab and I created an abstract model, but I don't know how to write the code that reads the data file and solves the model. The documentation gives instructions for the command prompt, but that does not apply here since I am working in Google Colab.
I would highly appreciate your help.

!pip install pyomo
from pyomo.environ import *
import matplotlib.pyplot as plt
!wget -N -q "https://ampl.com/dl/open/ipopt/ipopt-linux64.zip"
!unzip -o -q ipopt-linux64
model = AbstractModel()
model.x = Var(bounds=(0,1.2), within=Reals)
model.obj1 = Objective(expr=model.x**2, sense=maximize)
# opt = SolverFactory('ipopt')
opt = SolverFactory('ipopt', executable='/content/ipopt')
instance = model.create_instance()
results = opt.solve(instance) # solves and updates instance
print('OF= ',value(instance.obj1))
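For reference, a minimal sketch of how a data file could be written and read directly from a Colab cell (the index set, parameter, and file name below are illustrative placeholders, not part of the original model):
# Hypothetical abstract model with an index set and a parameter, so the .dat file has something to fill in
from pyomo.environ import AbstractModel, Set, Param, Var, Objective, SolverFactory, value, maximize, NonNegativeReals
model = AbstractModel()
model.I = Set()                      # placeholder index set
model.c = Param(model.I)             # placeholder coefficients read from the data file
model.y = Var(model.I, within=NonNegativeReals, bounds=(0, 1.2))
model.obj = Objective(rule=lambda m: sum(m.c[i] * m.y[i] for i in m.I), sense=maximize)
# Write the data file from the notebook itself, so no command prompt is needed
with open('example.dat', 'w') as f:
    f.write("set I := 1 2 3 ;\n")
    f.write("param c := 1 2.0  2 1.5  3 0.5 ;\n")
instance = model.create_instance('example.dat')   # the data file is read here
opt = SolverFactory('ipopt', executable='/content/ipopt')
results = opt.solve(instance)
print('OF = ', value(instance.obj))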

Related

Plot a subset of data from a grib file on google colab

I'm trying to plot a subset of a field from a grib file on Google Colab. The issue is that, because Colab uses an older version of Python, I can't get enough libraries to work together to 1) read a field from the grib file, 2) extract a subset of that field by lat/lon, and 3) plot it with matplotlib/cartopy.
I've been able to do each of these steps on my own PC, and there are numerous answers on this forum that already work away from Colab, so the issue is specifically about making this work in the Colab environment, which uses Python 3.7.
For simplicity, here are some assumptions anybody who wants to help can make:
1.) Use this file, since it's the one I have been trying to use:
https://noaa-hrrr-bdp-pds.s3.amazonaws.com/hrrr.20221113/conus/hrrr.t18z.wrfnatf00.grib2
2.) You could use any field, but I've been extracting this one (output from pygrib):
14:Temperature:K (instant):lambert:hybrid:level 1:fcst time 0 hrs:from 202211131800
3.) You can get this data in zarr format from AWS, but the grib format uploads to the AWS database faster so I need to use it.
Here are some notes on what I've tried:
Downloading the data isn't an issue; the main problem is extracting the data by lat/lon. I've tried using condacolab or pip to install pygrib, pupygrib, pinio, or cfgrib, and I can then use these to read the data above.
I could never get pupygrib or pinio to even install correctly. I was able to get cfgrib to work with conda, but then xarray fails when trying to extract fields because of a library conflict. Pygrib worked the best: I was able to extract fields from the grib file. However, the call grb.data(lat1=30, lat2=40, lon1=-100, lon2=-90) fails. It dumps the data into 1-D arrays instead of the 2-D arrays it is supposed to return per the documentation here: https://jswhit.github.io/pygrib/api.html#example-usage
Here is some code I used for the pygrib set up in case that is useful:
!pip install pyproj
!pip install pygrib
# Uninstall existing shapely
!pip uninstall --yes shapely
!apt-get install -qq libgdal-dev libgeos-dev
!pip install shapely --no-binary shapely
!pip install cartopy==0.19.0.post1
!pip install metpy
!pip install wget
!pip install s3fs
import time
from matplotlib import pyplot as plt
import numpy as np
import scipy
import pygrib
import fsspec
import xarray as xr
import metpy.calc as mpcalc
from metpy.interpolate import cross_section
from metpy.units import units
from metpy.plots import USCOUNTIES
import cartopy.crs as ccrs
import cartopy.feature as cfeature
!wget https://noaa-hrrr-bdp-pds.s3.amazonaws.com/hrrr.20221113/conus/hrrr.t18z.wrfnatf00.grib2
grbs = pygrib.open('/content/hrrr.t18z.wrfnatf00.grib2')
grb2 = grbs.message(1)
data, lats, lons = grb2.data(lat1=30,lat2=40,lon1=-100,lon2=-90)
data.shape
This outputs 1-D arrays for data, lats, and lons. That is as far as I can get here, because existing options like meshgrid don't work on datasets this big (I tried it).
The other option is to get data this way:
grb_t = grbs.select(name='Temperature')[0]
This is plottable, but I don't know of a way to extract a subset of the data from here using lat/lons.
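One direction that might work (a sketch I have not verified on this exact file) is to take the full 2-D values and lat/lon arrays from the selected message and mask them with NumPy, which keeps the 2-D shape for plotting:
# Sketch: subset a pygrib message by lat/lon with a NumPy mask instead of grb.data()
import numpy as np
import pygrib
grbs = pygrib.open('/content/hrrr.t18z.wrfnatf00.grib2')
grb_t = grbs.select(name='Temperature')[0]
values = grb_t.values                 # full 2-D field on the native Lambert grid
lats, lons = grb_t.latlons()          # matching 2-D latitude/longitude arrays
# Assumption: the grib longitudes are 0-360, so shift them to -180..180 before masking
lons = np.where(lons > 180, lons - 360, lons)
mask = (lats >= 30) & (lats <= 40) & (lons >= -100) & (lons <= -90)
data_box = np.ma.masked_where(~mask, values)   # masked arrays keep the 2-D shape
lat_box = np.ma.masked_where(~mask, lats)
lon_box = np.ma.masked_where(~mask, lons)
print(data_box.shape)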
If you can help, feel free to ask me anything; I can add more details, but since I've tried about 10 different approaches there's probably no sense in listing every failure. Really, I am open to any way to accomplish this task. Thank you.

Synapse Analytics Auto ML Predict No module named 'azureml.automl'

I am following the official tutorial from Microsoft: https://learn.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool
When I execute:
#Bind model within Spark session
model = pcontext.bind_model(
    return_types=RETURN_TYPES,
    runtime=RUNTIME,
    model_alias="Sales",         #This alias will be used in the PREDICT call to refer to this model
    model_uri=AML_MODEL_URI,     #In case of AML, it will be AML_MODEL_URI
    aml_workspace=ws             #This is only for AML. In case of ADLS, this parameter can be removed
).register()
I got : No module named 'azureml.automl'
My Notebook
As per the repro from my end, the code you have shared works as expected, and I don't see the error message you are experiencing.
I even tested the same code on a newly created Apache Spark 3.1 runtime and it works as expected.
I would suggest creating a new cluster and checking whether you are able to run the above code there.
I solved it. In my case it works best like this:
Imports
#Import libraries
from pyspark.sql.functions import col, pandas_udf,udf,lit
from notebookutils.mssparkutils import azureML
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication
from azureml.core.model import Model
import joblib
import pandas as pd
ws = azureML.getWorkspace("AzureMLService")
spark.conf.set("spark.synapse.ml.predict.enabled","true")
Predict function
def forecastModel():
    model_path = Model.get_model_path(model_name="modelName", _workspace=ws)
    modeljob = joblib.load(model_path + "/model.pkl")
    validation_data = spark.read.format("csv") \
        .option("header", True) \
        .option("inferSchema", True) \
        .option("sep", ";") \
        .load("abfss://....csv")
    validation_data_pd = validation_data.toPandas()
    predict = modeljob.forecast(validation_data_pd)
    return predict
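Calling it is then straightforward; what forecast() returns depends on the AutoML model (for example an array or a (values, transformed_data) tuple), so the inspection below is only a sketch:
# Sketch of using the function above; inspect the return value before post-processing it
predict = forecastModel()
print(type(predict))
print(predict)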

Knowledge required to understand YOLOR

I'm trying to learn YOLOR, but everything looks like an alien language to me, so for the experts out there:
What knowledge do I need to start learning and implementing this object detection model?
Do I need to learn YAML and shell commands in PyCharm to run YOLOR?
What I know:
Basic Python
CNNs
How the YOLOR dataset is labeled
Thank you for sharing your knowledge.
Before you start using YOLOR, I suggest you begin with YOLOv5, which is easier to learn and implement. You need to know how to run shell commands in PyCharm. Here is a simple script you can run in PyCharm:
import torch
import numpy as np
import cv2
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')   # load the pretrained YOLOv5s model from the hub
cap = cv2.VideoCapture(0)                                 # open the default webcam
while cap.isOpened():
    ret, frame = cap.read()
    results = model(frame)                                # run detection on the current frame
    results.print()
    cv2.imshow('YOLO', np.squeeze(results.render()))      # draw the detections and show the frame
    if cv2.waitKey(10) & 0xFF == ord('q'):                # press q to stop
        break
cap.release()
cv2.destroyAllWindows()
Before you run this code you need to install and import some dependencies:
Install PyTorch:
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
Clone YOLOv5: git clone https://github.com/ultralytics/yolov5
Install the requirements: cd yolov5 && pip install -r requirements.txt
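Once the repository is cloned and the requirements are installed, you can also test the same hub model on a single image instead of the webcam. A quick sketch; the sample image path assumes you run it from the folder that contains the cloned yolov5 repository:
import torch
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')   # pretrained YOLOv5s from the hub
results = model('yolov5/data/images/zidane.jpg')          # sample image shipped with the repo
results.print()                                           # class counts and inference speed
results.save()                                            # writes the annotated image under runs/detect/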

How to run a specific code from Google Colaboratory in Jupyter Notebook?

I am currently doing a TensorFlow course on Udacity, where the code is executed in Google Colaboratory.
Now I tried to run the code locally in Jupyter Notebook, and I get an error message.
The code is:
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
zip_dir = tf.keras.utils.get_file('cats_and_dogs_filterted.zip', origin=_URL, extract=True)
zip_dir_base = os.path.dirname(zip_dir)
!find $zip_dir_base -type d -print
In Google Colaboratory it runs fine. In Jupyter Notebook, the last part gives me an error message about a wrong data type.
What does the following line mean?
!find $zip_dir_base -type d -print
What do I have to change to run it in Jupyter Notebook?
You can check out this code from the Deep Learning with Python book. It uses the same dataset but runs locally:
https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.2-using-convnets-with-small-datasets.ipynb
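As for the !find $zip_dir_base -type d -print line: the leading ! tells Colab/IPython to run a shell command, $zip_dir_base substitutes the value of the Python variable, and find ... -type d -print lists every directory under that path. If the shell escape does not behave the same way in your local Jupyter, a plain-Python equivalent is a small sketch like this (assuming zip_dir_base from your code):
# Pure-Python replacement for !find $zip_dir_base -type d -print:
# walk the extracted dataset folder and print every directory in it
import os
for root, dirs, files in os.walk(zip_dir_base):
    print(root)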

Why does jupyter fail to display displacy results?

I'm trying to run displacy in jupyter notebook but I get the following error:
TypeError: __init__() got an unexpected keyword argument 'encoding'
Code:
import en_core_web_sm
import spacy
nlp = en_core_web_sm.load()
from spacy import displacy
doc4 = nlp("The rat the cat the dog chased killed ate the malt")
displacy.render(doc4, style ='dep', jupyter = True)
However, the dependencies are appearing through the following code:
for x in doc4:
    print(x.text, x.pos_, x.dep_)
Can someone please point out what's wrong?
import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
doc4 = nlp("The rat the cat the dog chased killed ate the malt")
displacy.render(doc4, style ='dep', jupyter = True)
This (and your exact code as well) works for me on spacy==2.0.16.
You can also try it in a Jupyter notebook running in the cloud, such as https://colab.research.google.com
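If rendering inside Jupyter keeps failing, one workaround (a sketch, not a fix for the encoding error itself) is to ask displacy for the raw markup and write it to an HTML file you can open in a browser:
# Sketch: render the dependency parse outside Jupyter by saving displacy's markup to a file
import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("The rat the cat the dog chased killed ate the malt")
html = displacy.render(doc, style="dep", jupyter=False, page=True)   # returns an HTML string
with open("dependency_parse.html", "w", encoding="utf-8") as f:
    f.write(html)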