When I specify fields in my API request, for most things I get just the single value. However, when I want just the URL of a file, I get everything, or more than I want. If I specify file.data.url in my fields filter I get this:
This is a known issue in Directus. Currently, the fields parameter only works with actual columns of the database table.
The data value is not a column of the directus_files table.
For now, you have to request the whole data object with all of its values.
Add fields as a parameter to the request:
?fields=file.data.full_url
I would recommend full_url rather than url, as url is relative.
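For example, a full request might look like this (a sketch; the articles collection and the extra fields are hypothetical, and the exact path varies by Directus version):

GET /items/articles?fields=id,title,file.data.full_url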
The problem was to add a new field to an existing datasource and fill it with some default value.
I tried to do so via this article,
but the actual result is that the new column was added, yet it is filled with null values.
Where did I go wrong, and can I fix it in the same way?
It would be hard to tell without looking at how you added the new column in your ingestion spec.
I would suggest using the Druid unified console data loader UI: parse your input data, then define the additional column under the Transform section. The advantage of the data loader UI is that you can preview the transformed result immediately, and once the workflow is complete you get an ingestion spec which you can submit from there directly.
Eg - transformation example:
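(A sketch assuming an expression transform; the column name new_col and its constant default value are hypothetical.)

"transformSpec": {
  "transforms": [
    {
      "type": "expression",
      "name": "new_col",
      "expression": "'default_value'"
    }
  ]
}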
From the documentation here: https://druid.apache.org/docs/latest/design/segments.html#different-schemas-among-segments
Maybe the result is related to the fact that existing segments don't have the new field and therefore return null for it. Only data ingested after the change will carry the new column; to backfill older data you would have to reindex it.
I successfully created a new table using data I uploaded to Google Cloud Platform's Storage, but the problem is that the header field names are always wrong when I use the Automatically detect setting and set "Header rows to skip" to 1... I just get generic names such as "string_field_0".
I know I can manually add field names under Schema; however, that is not feasible for tables with many fields. Is there a way to fix the header names? It doesn't seem like it should be a big deal... Pandas does this automatically all the time.
Thanks!
csv file in Excel:
The problem is that you only have String types in your file, so BigQuery can't differentiate between the header and actual valid rows. If you had, say, another column with something other than a String (e.g. an Integer), then it would detect the column names. For example:
column1,column2,column3
foo,bar,1
cat,dog,2
fizz,buzz,3
This loads correctly, with the proper column names, because there is something other than just Strings in the data.
So, either you need to have something other than just Strings, or you need to explicitly specify the schema yourself.
Hint: you don't have to use the UI and click a load of buttons to define the schema. You can do it programmatically using the API or the bq command-line tool.
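For instance, a sketch with the bq tool (the dataset, table, and bucket names are hypothetical):

bq load --source_format=CSV --skip_leading_rows=1 \
    mydataset.mytable gs://mybucket/data.csv \
    column1:STRING,column2:STRING,column3:STRING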
Since it was not mentioned here: what helped me was setting "Header rows to skip" to 1. You can find it under Advanced options:
My data came from a Google Sheet and it already had integer values in some columns.
The same issue occurs with Google Sheets as well. Right, the cause is having nothing but string data in the sheet. But the workaround is simple with Google Sheets: just add an integer column as described here.
I'm searching a database for those who have accessed certain websites, but I want to remove part of the URL because the numbers within it are not static.
For instance, if "http://m.mlb.com/news/article/215311692/red-sox-send-off-equipment-on-truck-day/" was the URL, I want to remove the "215311692" so that I get all mlb.com results with that specific title.
Typically I use this on the join: ilike '%'||SUBSTRING(page_url, E'(([a-z0-9-]+\.){1,2}[a-z]{2,4})')||'%'. What do I need to add or change in order to remove the numbers in the URL from my search?
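One possibility is stripping the digit runs with regexp_replace before comparing (a sketch, assuming PostgreSQL given the E'...' literal; the table and column names are hypothetical):

-- normalizes .../article/215311692/title and .../article/999/title
-- to the same string by deleting every run of digits
SELECT v.*
FROM page_visits v
WHERE regexp_replace(v.page_url, '[0-9]+', '', 'g')
      ILIKE '%' || regexp_replace(
          'http://m.mlb.com/news/article/215311692/red-sox-send-off-equipment-on-truck-day/',
          '[0-9]+', '', 'g') || '%';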
I want to delete mappings in the DB using DataService.
For this purpose:
I run a search query for all ids in the first thread group.
Using this method I put my ids into a property.
Now the property looks like this: b69243ee6e9efdf66114200dc93881ac,b69243ee6e9efdf66114200dc90f5ba4,b69243ee6e9efdf66114200dc90e2184
I want to delete all mappings using these ids from the property, one by one.
For this purpose I need to run a BeanShell PreProcessor that picks the first id and puts it into a variable. Please help me with this script.
I believe a ForEach Controller is the solution. In my experience, I use a Regular Expression Extractor to grab the values and put them into one variable, then loop over it using a ForEach Controller. I think these steps will help you:
Add a Regular Expression Extractor to grab the ids.
Make sure you fill the "Match No." field with a negative number (i.e. -1). Put the extracted value into one variable (i.e. IdVar; fill "Reference Name" with IdVar). This step grabs all matched ids and puts them into the IdVar variable.
Then add a ForEach Controller to process each id from the IdVar variable, as sketched below.
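For illustration, the two elements might be configured like this (a sketch; the regular expression is hypothetical and assumes the ids appear in the response as 32-character hex strings):

Regular Expression Extractor:
  Reference Name:     IdVar
  Regular Expression: ([a-f0-9]{32})
  Template:           $1$
  Match No.:          -1

ForEach Controller:
  Input variable prefix: IdVar
  Output variable name:  currentId

With Match No. -1 the extractor stores the matches as IdVar_1, IdVar_2, and so on; the ForEach Controller then iterates over them and exposes each one as ${currentId} inside your delete sampler.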
For the details, just download this sample and try to run it.
I hope this will help you. :-)
I am working on a transformation step for Pentaho Kettle. It selects several input columns and, based on those, adds two new columns during the transformation. I am unable to understand (based on code from other plugins) how I can add the two new columns so that 1) steps downstream are aware of these columns and 2) I can push the transformed data into these columns.
Thanks in advance.
You might need to override meta.getStepFields() to add new ValueMetaInterface objects to the RowMetaInterface passed in. This is the standard way to add columns at runtime; however, the row's metadata (i.e. list of ValueMetaInterface objects) must be the same from row to row or else the next step in your transformation will complain.
Often when doing data-driven custom plugins, you consume as many rows as you need (using getRow()) in order to figure out what the outgoing row format/metadata will be, then you can construct a RowMetaInterface (usually using meta.getStepFields()) that will be passed into the putRow() call. If you intend to pass through the incoming fields, do something like:
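// keep all incoming fields and append any new columns to this copy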
RowMetaInterface outputRowMeta = getInputRowMeta().clone();
If you're creating new rows use this:
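// start from empty metadata and define every outgoing field yourself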
RowMetaInterface outputRowMeta = new RowMeta();
Either way when you call meta.getStepFields(outputRowMeta, ...) it should populate outputRowMeta with the appropriate fields, by adding/changing/removing ValueMetaInterface objects from outputRowMeta.
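To make this concrete, here is a minimal sketch of both pieces (assuming a PDI 5.x-era API; the exact signature of this override varies across PDI versions, where it is usually named getFields(), and the column names below are hypothetical):

import org.pentaho.di.core.exception.KettleStepException;
import org.pentaho.di.core.row.RowDataUtil;
import org.pentaho.di.core.row.RowMetaInterface;
import org.pentaho.di.core.row.ValueMetaInterface;
import org.pentaho.di.core.row.value.ValueMetaString;
import org.pentaho.di.core.variables.VariableSpace;
import org.pentaho.di.trans.step.StepMeta;

// 1) In the step's Meta class: declare the two extra columns so that
//    downstream steps see them in the outgoing row metadata.
public void getFields( RowMetaInterface inputRowMeta, String name,
    RowMetaInterface[] info, StepMeta nextStep, VariableSpace space )
    throws KettleStepException {
  ValueMetaInterface scoreMeta = new ValueMetaString( "score" );       // hypothetical column
  scoreMeta.setOrigin( name );
  inputRowMeta.addValueMeta( scoreMeta );

  ValueMetaInterface categoryMeta = new ValueMetaString( "category" ); // hypothetical column
  categoryMeta.setOrigin( name );
  inputRowMeta.addValueMeta( categoryMeta );
}

// 2) In the step class's processRow(): widen each data row to match the
//    new metadata, fill in the computed values, and pass the row on.
Object[] outputRow = RowDataUtil.resizeArray( row, outputRowMeta.size() );
outputRow[ outputRowMeta.size() - 2 ] = score;      // hypothetical values
outputRow[ outputRowMeta.size() - 1 ] = category;
putRow( outputRowMeta, outputRow );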
I've got a blog post using Groovy to add/replace fields in the incoming rows here:
http://funpdi.blogspot.com/2014/10/flatten-json-to-key-value-pairs-in-pdi.html
Not sure if that is similar to your use case or not. If you have more questions, feel free to find me on IRC at ##pentaho (my nick is usually mburgess_pdi)
If I have understood your question correctly, I think you are trying to create an output file with dynamic columns. You can do this by checking the "fast dumping" option in the Text File Output step. While doing so, do not define any column names in the "Fields" tab.
Check my image below:
Hope it helps :)