Trying to open an Azure blob file using GDAL. Getting an error "does not exist in the file system, and is not recognized as a supported dataset name."

I am trying to open an Azure blob file using GDAL and get the error "does not exist in the file system, and is not recognized as a supported dataset name."
from osgeo import gdal

gdal.SetConfigOption('AZURE_STORAGE_ACCOUNT', "accountname")
gdal.SetConfigOption('AZURE_STORAGE_ACCESS_KEY', "Key")
gdal.SetConfigOption('AZURE_NO_SIGN_REQUEST', 'YES')
a = gdal.Open('/vsiadls/container/folder/filename.tif')
type(a)
I tried both /vsiaz/ and /vsiadls/.
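For reference, a minimal sketch of how such an open might look with the /vsiaz/ handler (account name, key and path are placeholders; this assumes the container is private and the key is valid). Note that AZURE_NO_SIGN_REQUEST=YES tells GDAL to send unsigned (anonymous) requests, which conflicts with supplying an access key:
from osgeo import gdal

gdal.UseExceptions()  # raise a Python exception on failure instead of returning None
gdal.SetConfigOption('AZURE_STORAGE_ACCOUNT', 'accountname')
gdal.SetConfigOption('AZURE_STORAGE_ACCESS_KEY', 'Key')
# /vsiaz/ addresses Blob Storage containers; /vsiadls/ addresses ADLS Gen2 filesystems
ds = gdal.Open('/vsiaz/container/folder/filename.tif')
print(ds.RasterXSize, ds.RasterYSize)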

Related

I am trying to perform a simple linear regression on a csv data set, but R won't read the dataset

I am running the code below to read a CSV file so that I can perform a linear regression. A few fixes I found here and on other sites included the setwd command and closing the CSV file before running the code. I am still getting the error.
setwd("C:/Users/Tommy/Desktop/")
dataset = file.choose("Project_subset.csv")
dataset = read.csv("dataset")
> dataset = read.csv("dataset")
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") : cannot open file 'dataset': No such file or directory
I appreciate the help on a simple problem.
I have tried several different ways to read the CSV file and none have been successful. I keep getting the error above that the file does not exist. I also used file.exists() and it returned FALSE. I am very confused, as this seems to be a simple command to use.

How to read a mounted dbc file in Databricks?

I am trying to read a dbc file in Databricks (mounted from an S3 bucket).
The file path is:
file_location="dbfs:/mnt/airbnb-dataset-ml/dataset/airbnb.dbc"
How can I read this file using Spark?
I tried the code below:
df=spark.read.parquet(file_location)
But it generates an error:
AnalysisException: Unable to infer schema for Parquet. It must be specified manually.
Thanks for the help!
I tried the code below: df=spark.read.parquet(file_location) But it generates an error:
You are using spark.read.parquet but want to read a dbc file. It won't work this way.
Don't use parquet; use load instead. Give the file path with the file name (without the .dbc extension) in the path parameter and dbc in the format parameter.
Try the code below:
df=spark.read.load(path='<file_path_with_filename>', format='dbc')
E.g.: df=spark.read.load(path='/mnt/airbnb-dataset-ml/dataset/airbnb', format='dbc')
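If the load succeeds, a quick sanity check (a minimal sketch; it assumes the dbc source really is readable by your Spark runtime):
df.printSchema()
df.show(5)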

Convert Excel subsheets into individual CSVs in Azure cloud storage using PowerShell

I am writing a PowerShell script which should do the following things:
Take an Excel file from Azure blob storage folder 'A' as input
Extract the subsheets from the Excel file and convert them into individual CSVs
Transfer those CSVs to the same blob storage, in folder 'B'
I am able to do the 1st and 2nd steps, but after the 2nd step I have an Excel worksheet object which I have to transfer to blob storage folder 'B'. This is where I am not able to proceed. For copying a file to blob storage there are 2 methods:
1. Start-AzureStorageBlobCopy - this cmdlet can only copy a blob, but as I said I have a file object (look below for a better understanding):
$wb = $E.Workbooks.Open($sf)
foreach ($ws in $wb.Worksheets)
I mean, I have $ws, which is a worksheet object from the Excel file.
2. Set-AzureStorageBlobContent - this cmdlet requires a local file system path, which means it can only upload files to a blob from a local directory.
Can anyone suggest the correct method to tackle this situation? Any help would be appreciated.

SAP DS: Reading an input XML file results in an error

I am using SAP Data Services v. 4.2.
I am trying to read an XML file as input.
I created a new XML Schema starting from a .xsd file.
When I launch the job I get this error:
2076818752 FIL-052226 7/25/2017 2:56:35 PM |Data flow DF_FE_XXXX
2076818752 FIL-052226 7/25/2017 2:56:35 PM <XML file reader->READ MESSAGE XX_INPUT_FILE OUTPUT(XX_INPUT_FILE)> cannot find file location object <%1> in repository.
24736 20092 RUN-050304 7/26/2017 9:18:39 AM Function call <raise_exception ( Error 52226 handled in Error_handling ) > failed, due to error <50316>
What am I doing wrong?
Thanks
The problem is in the way you identify the file location in the Data File(s) section of your format: BODS thinks that you have specified a File Location object and it cannot find one in the repository.
For more information, see "File Locations" in the documentation.

Export a table from PostgreSQL with ogr2ogr

I want to export a table from my database as a MapInfo File, using the tool ogr2ogr.
This is the command I found in the documentation:
ogr2ogr -f "MapInfo File" test.mid PG:"host=localhost user=postgres dbname=Ocean_Extraction password=admin" "tablec"
After this I get an error message:
ERROR 6: Unable to open test.mif.
ERROR 1: MapInfo File driver failed to create test.mif
How can I avoid this? I do not want to open this file; I want to create a new one based on the database table...
The error says that your ogr2ogr is not configured with the MapInfo driver.
You can check the supported formats with
ogr2ogr --formats
If you can't find MapInfo in that list, you need an ogr2ogr build configured with MapInfo support, or you need to build it from source yourself.
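If the GDAL/OGR Python bindings are installed, a small sketch of an equivalent check from Python (this assumes the osgeo package is available in your environment):
from osgeo import ogr
# GetDriverByName returns None when the driver was not compiled into this GDAL/OGR build
driver = ogr.GetDriverByName('MapInfo File')
print('MapInfo File driver available' if driver else 'MapInfo File driver missing')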