I have a job which processes files and then lands them as single CSVs on a blob storage container. The problem I face is that I also need to land empty files, which only contain the header. How can this be achieved when I use .saveSingleFile?
Example code snippet:
df.coalesce(1)
.write
.options(configuration.readAndWriteOptions)
.partitionBy(INGESTION_TIME)
.format("csv")
.mode("append")
.saveSingleFile(path.toString)
Example readAndWriteOptions:
{"sep": ";", "header": "true"}
In other words:
In the above case, if df.show() displays only the header, no CSV file is written. However, I want to output a CSV file with no data rows but with the column names. Is there an option that would allow this? Both cases need to work, whether data is available or not, so something like .take(1) will not be a sufficient solution.
Update:
Looks like this is related to a Spark API bug and should have been resolved in Spark 3.
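For Spark versions before that fix, one possible workaround is to detect the empty DataFrame on the driver and write the header line yourself. A minimal PySpark sketch of this idea (upload_text is a hypothetical blob-upload helper, and a plain .save() stands in for .saveSingleFile; neither is part of the original job):
def land_csv(df, path, sep=";"):
    if df.rdd.isEmpty():
        # No rows: build a header-only file from the schema and upload it directly.
        upload_text(path, sep.join(df.columns) + "\n")  # hypothetical helper
    else:
        (df.coalesce(1)
           .write
           .option("sep", sep)
           .option("header", "true")
           .format("csv")
           .mode("append")
           .save(path))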
Related
I'm trying to see every file in a certain directory, but since each file in the directory is very large, I can't use sc.wholeTextFiles or sc.textFile. I just want to get the filenames, and then pull a given file if needed in a different cell. I can access the files just fine using Cyberduck, and it shows their names there.
Ex: I have the link for one set of data at "name:///mainfolder/date/sectionsofdate/indiviual_files.gz", and it works, but I want to see the names of the files in "/mainfolder/date" and in "/mainfolder/date/sectionsofdate" without having to load them all in via sc.textFile or sc.wholeTextFiles. Both of those functions work, so I know my keys are correct, but it takes too long for them to load.
Considering that the list of files can be retrieved by a single node, you can just list the files in the directory. Look at this response.
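For example, a minimal PySpark sketch of listing the names without reading any file contents, using the Hadoop FileSystem API through the JVM gateway (it assumes a SparkSession named spark and reuses the placeholder path from the question):
hadoop_conf = spark._jsc.hadoopConfiguration()
dir_path = spark._jvm.org.apache.hadoop.fs.Path("name:///mainfolder/date")
fs = dir_path.getFileSystem(hadoop_conf)
# listStatus returns only metadata, so nothing is read from the files themselves
file_names = [status.getPath().toString() for status in fs.listStatus(dir_path)]
print(file_names)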
wholeTextFiles returns pairs of (path, content), but I don't know whether the content is read lazily, i.e., whether you could take only the first element of each pair without actually loading the files.
I have written a simple query and joined it with JSON reference data. I can see correct results when testing the query in the "Test results" tab. However, no output is generated when the job is started.
I have confirmed that the output blob is created when no join with reference data is used in the query.
Any help is appreciated. The sample reference json follows:
[
  {
    "DeviceId": "DEV-021",
    "Brand": "brand01",
    "Model": "model01"
  }
]
Use a flat JSON structure instead of an array. That should give you the output.
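For illustration, a flat version of the sample above would be a single JSON object rather than an array (whether this is strictly required may depend on your Stream Analytics setup):
{
  "DeviceId": "DEV-021",
  "Brand": "brand01",
  "Model": "model01"
}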
Check the path you specified for the reference data; maybe it is not correct, or you did not specify the file name. Does it contain something like {date}/{time}/filename.json?
If you forget to specify the file name, it will not work either.
Also, when you test the job, you usually select the file manually, which is why your query works there.
For example, if my text data comes from a database, how can I get one line/document at a time (i.e., one database record) using the same mechanism as TextLineDataset (subclassing Dataset such that the pipeline described here still works)?
By looking at the source code of TextLineDataset, I find that make_dataset_resource() seems to be an important method to implement. But I can't find the actual code that yields a line from a file, as the docstring of TextLineDataset puts it: "A Dataset comprising lines from one or more text files."
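One alternative to subclassing is to wrap the database cursor in a Python generator and build the pipeline with tf.data.Dataset.from_generator. A rough sketch under assumptions not in the question (sqlite3, a corpus.db file and a documents table are hypothetical stand-ins, and output_signature assumes TF 2.x):
import sqlite3
import tensorflow as tf

def fetch_records():
    # Hypothetical database and table; replace with your own driver and query.
    conn = sqlite3.connect("corpus.db")
    for (text,) in conn.execute("SELECT text FROM documents"):
        yield text
    conn.close()

dataset = tf.data.Dataset.from_generator(
    fetch_records,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.string))

for line in dataset.take(2):
    print(line.numpy())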
When I do an extract from multiple files and include part of the filename in the fields list and in the FROM clause (e.g. FROM "/input/filename-{filedate:*}.nc"), the resulting output file only contains a header row. If I remove "filedate" from the fields list and the FROM clause, I get the correct output.
I noticed in the job graph that when including "filedate", an "Empty Input" and an "Extract Cross" step are added before the "PodAggregate" step, and in the "Extract Cross" step no data is written. What is this step?
Also, if I run the original extract including "filedate" locally, I get the correct output, so this error only occurs in ADLA.
I use a custom extractor and I don't know if this has anything to do with it. I haven't tested with a built-in extractor.
We recently released the new "fast file set" option and enabled it by default. Unfortunately, it introduced a regression for some plans. Until we fix it, please add the following statement to your script:
SET @@InternalDebug = "FileSetV2:off";
Our apologies for any inconvenience this may have caused.
I need to store the data presented in the graphs on the Google Ngram website. For example, I want to store the occurrences of "it's" as a percentage from 1800-2008, as presented in the following link: https://books.google.com/ngrams/graph?content=it%27s&year_start=1800&year_end=2008&corpus=0&smoothing=3&share=&direct_url=t1%3B%2Cit%27s%3B%2Cc0.
The data I want is the data you're able to scroll over on the graph. How can I extract this for about 140 different terms (e.g. "it's", "they're", "she's", etc.)?
econpy wrote a nice little module in Python that you can use through a command-line interface.
For your "it's" example, you would need to type this command in a terminal / windows console:
python getngrams.py it's -startYear=1800 -endYear=2008 -corpus=eng_2009 -smoothing=3
This will automatically save the query result in a CSV file named after your query parameters.
econpy's package, from HugoMailhot's answer above, no longer works (as of 2021) and seems unmaintained.
Here's an updated version, with some improvements for easier integration into Python code:
https://gitlab.com/cpbl/google-ngrams
You can call this from the command line (as with econpy's version) to create a CSV file, e.g.
getngrams.py it's -startYear=1800 -endYear=2008 -corpus=eng_2009 -smoothing=3
or call it from Python to get (and plot) the data directly, e.g.:
from getngrams import ngrams
df = ngrams('bells and whistles -startYear=1900 -endYear=2018 -smoothing=2')
df.plot()
The xkcd functionality is still there too.
(Issues / bug fix pull requests /etc welcome there)
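To cover the ~140 terms from the question, you could loop over them and save one CSV per term; a rough sketch, assuming ngrams() returns a pandas DataFrame as the df.plot() example above suggests:
from getngrams import ngrams

terms = ["it's", "they're", "she's"]  # extend to all ~140 terms
for term in terms:
    df = ngrams(f"{term} -startYear=1800 -endYear=2008 -smoothing=3")
    safe_name = term.replace("'", "")  # drop the apostrophe for the file name
    df.to_csv(f"{safe_name}_1800_2008.csv", index=False)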