Add Array to field instead of String - directus
I have the following problem:
I want to set my field's value to the following array:
value = [0,16,33,50,67,84,101,118,135,152,169,186,203,220,237,254,271,288,305,322,338,355,372,389,406,423,440,457,474,491,508,525,542,559,576,593,610,627,644,661,677,694,711,728,745,762,779,796,813,830,847,864,881,898,915,932,949,966,983,1000,1016,1033,1050,1067,1084,1101,1118,1135,1152,1169,1186,1203,1220,1237,1254,1271,1288,1305,1322,1338,1355,1372,1389,1406,1423,1440,1457,1474,1491,1508,1525,1542,1559,1576,1593,1610,1627,1644,1661,1677]
I tried the JSON field type (and every other field type), but the API returns the value as a string (wrapped in quotes), which breaks the processing I do with it afterwards. How can I work around this?
I'm not entirely sure if this answers your question, but Directus 6 stores data using only the MySQL 5 datatypes. Therefore, CSV / JSON values are saved as strings (often in the TEXT datatype). If you want to use this data in your application as an array / JSON, you will have to convert it yourself.
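If you query MySQL directly (outside the Directus API) and your server is 5.7 or newer, MySQL's JSON functions can also do the parsing at query time. A minimal sketch, with hypothetical table and column names:

-- Hypothetical names: Directus stores the value as TEXT, but MySQL 5.7+
-- can parse a JSON-formatted string on the fly.
SELECT JSON_EXTRACT(`value`, '$[0]') AS first_element
FROM my_values_table
WHERE field_name = 'my_array_field';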
The Directus team is working to support more (custom) datatypes in future versions, so the API can respond with nested arrays/objects in JSON.
Related
Data Factory expression substring? Is there a function similar to right?
Please help: how could I extract 2019-04-02 out of the following string with an Azure Data Flow expression? ABC_DATASET-2019-04-02T02:10:03.5249248Z.parquet The first part of the string, received as a ChildItem from a GetMetaData activity, is dynamic. In this case it is ABC_DATASET. Kind regards, D
There are several ways to approach this problem, and they really depend on the format of the string value. Each of these approaches uses a Derived Column to either create a new column or replace the existing column's value in the Data Flow.

Static format

If the format is always the same, meaning the length of each section is constant, then substring is simplest. Useful reminder: substring and array indexes in Data Flow are 1-based.

Dynamic format

If the format of the base string is dynamic, things get a tad trickier. For this answer, I will assume that the basic format of {variabledata}-{timestamp}.parquet is consistent, so we can use the hyphen as a base delimiter. This will lead to some other problems later, since the string includes multiple hyphens thanks to the timestamp data, but we'll deal with that below.

Derived Column has support for local variables, which is really useful when solving problems like this one. Start by creating a local variable that converts the string into an array based on the hyphen: inside the Derived Column Expression Builder, select "Locals", then on the right side click "New" to create a local variable, name it, and define it using a split expression. Press "OK" to save the local and go back to the Derived Column.

Next, create another local variable for the yyyy portion of the date. The cool part is that this one can reference the local variable array created in the previous step. Follow the same pattern to create a local variable for MM. Do this one more time for the dd portion, but this time a bit more work is needed to strip all the extraneous data at the end of the string; substring again turns out to be a good solution.

Now that the needed components are isolated as variables, reconstruct them using string interpolation in the Derived Column, and the data preview will show the results.

Where else to go from here

If these solutions don't address your problem, then you have to get creative. Here are some other functions that may help: regexSplit, left, right, dropLeft, dropRight.
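For reference, here is a compact sketch in Data Flow expression syntax that collapses the steps above into a single expression (the input column name fileName is an assumption, as is relying on the date always being the 10 characters after the first hyphen):

substring(fileName, instr(fileName, '-') + 1, 10)

Applied to ABC_DATASET-2019-04-02T02:10:03.5249248Z.parquet, instr finds the first hyphen at position 12, so the substring starts at position 13 and returns 2019-04-02.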
How do I select a certain key/value pair from a JSON field inside a SQL table in Snowflake?
I am currently building a data warehouse in Snowflake for the business I work for, and I have encountered some problems. I used to apply the function JSON_VALUE in T-SQL to extract certain key/value pairs from the JSON-formatted field in my original MSSQL DB. All the other fields are in regular SQL format, but there is this one field I really need that is formatted as JSON, and I can't seem to extract the key/value pair I need. I'm new to SnowSQL and can't seem to find a way to do this within a regular query. Does anyone know a way around my problem?

ID | TYPE | Name (JSON_FORMAT)         | Amount
1  | 5    | {En: "lunch", fr: "diner"} | 10.00

I would like to query this row (for example) and retrieve only the En: "lunch" part from my JSON-formatted field. Thank you!
Almost any time you use JSON in Snowflake, it's advisable to use the VARIANT data type. You can use the parse_json function to convert a string into a variant holding JSON:

select parse_json('{En: "lunch", fr: "diner"}') as VARIANT_COLUMN,
       VARIANT_COLUMN:En::string as ENGLISH_WORD;

In this sample, the first column converts your JSON into a variant named VARIANT_COLUMN. The second column uses the variant, extracting the "En" property and casting it to a string data type. You can define columns as variant and store JSON natively. That's going to improve performance and allow parsing using dot notation in SQL.
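To apply the same idea to the table from the question, one option is to parse first and then use the path syntax (a sketch; my_table is a hypothetical name, and it assumes Name holds the JSON as text):

with parsed as (
    select ID, parse_json(Name) as name_json
    from my_table
)
select name_json:En::string as meal
from parsed
where ID = 1;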
For anyone else who also stumbles upon this question: you can also use JSON_EXTRACT_PATH_TEXT. Here is an example, if you wanted to create a new column called meal:

select json_extract_path_text(Name, 'En') as meal
from ...
HANA: Unknown Characters in Database column of datatype BLOB
I need help with resolving characters of an unknown type from a database field into a readable format, because I need to overwrite this value on the database level with another valid value (in the exact format the application stores it in) to automate system copy activities.

I have a proprietary application that also allows users to configure it via the frontend. This configuration data gets stored in a table, and the values of a configuration property are stored in a column of type BLOB. For the value in question here, I provide a valid URL in the application frontend (like http://myserver:8080). However, what gets stored in the database is not readable (some square characters).

I tried all sorts of HANA conversion functions (HEX, binary), both simple and cascaded (e.g. first to binary, then to varchar), to make it readable. I also tried it the other way around, making the value I want to insert appear in the correct format (conversion to BLOB via hex or binary), but this does not work either. I copied the value to the clipboard and compared it to all sorts of character set tables (although I am not sure if this can work at all). My conversion attempts look somewhat like this:

SELECT TO_ALPHANUM('') FROM DUMMY;

where the quotes would contain the characters in question; I can't even print them here. How can one approach this and maybe find out the character set that is used by this application? I would be grateful for some more ideas.
What you have in your BLOB column is a series of bytes. As you mentioned, these bytes have been written by an application that uses an unknown character set. In order to interpret those bytes correctly, you need to know the character set, as this is literally the mapping of bytes to characters or character identifiers (e.g. code points in UTF). Now, HANA doesn't come with a whole lot of options to work on LOB data in the first place, and for CLOB (character LOB) data most manipulations implicitly perform a conversion to a string data type. So, what I would recommend is to write a custom application that is able to read out the BLOB bytes and perform the conversion in that custom app. Once successfully converted into a string, you can store the data in a new NCLOB field that keeps it in UTF-8 encoding. You will have to know the character set in the first place, though. No way around that.
I assume you are on Oracle. You can convert BLOB to CLOB as described here: http://www.dba-oracle.com/t_convert_blob_to_clob_script.htm In the case of your example, try this query:

select UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(<your_blob_value>)) from dual;

Obviously this only works for values below 32767 characters.
Use Postgres to parse stringified JSON object
I've been using Postgres to store JSON objects as strings, and now I want to utilize PG's built-in json and jsonb types to store the objects more efficiently. Basically, I want to parse the stringified JSON and put it in a json column from within PG, without having to resort to reading all the values into Python and parsing them there. Ideally, my migration should look like this:

UPDATE table_name SET json_column=parse_json(string_column);

I looked at Postgres's JSON functions, and there doesn't seem to be a method of doing this, even though it seems pretty trivial. For the record, my JSON objects are just one-dimensional arrays of strings. Is there any way to do this?
There is no need for a parse_json function, just change the type of the column:

ALTER TABLE table_name
  ALTER COLUMN json_column TYPE json USING json_column::json;

Note that if you plan on doing a lot of JSON operations on these values (i.e. extracting elements from objects, modifying objects etc.) it's better to use jsonb. json should only be used for storing JSON data. Also, as Laurenz Albe points out, if you don't need to do any JSON operations on these values and you are not interested in the validation that postgresql can do on them (e.g. because you trust that the source always provides valid JSON), then using text is a perfectly valid option (or bytea).
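For instance, with jsonb the one-dimensional string arrays from the question become directly queryable (a sketch with the same assumed names):

ALTER TABLE table_name
  ALTER COLUMN json_column TYPE jsonb USING json_column::jsonb;

-- ->> extracts an array element as text; jsonb arrays are 0-indexed.
SELECT json_column->>0 AS first_string FROM table_name;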
Dynamic type cast in select query
I have totally rewritten my question because of an inaccurate description of the problem!

We have to store a lot of different information about a specific region. For this we need a flexible data structure which does not limit the possibilities for the user. So we've created a key-value table for this additional data, which is described through a meta table that contains the datatype of the value. We already use this information for queries over our REST API; we automatically wrap the requested field in a cast. SQL Fiddle

We return this data together with information from other tables as a JSON object. We convert the corresponding rows from the data table with array_agg and json_object into a JSON object:

...
CASE WHEN count(prop.name) = 0 THEN '{}'::json
     ELSE json_object(array_agg(prop.name), array_agg(prop.value))
END AS data
...

This works very well. The problem we have is that if we store data like a floating point number in this field, it is returned as a string representation of that number: e.g. 5.231 comes back as "5.231".

Now we would like to CAST this number during our select statement into the right data format so the JSON result would be correctly formatted. We have all the information we need, so we tried the following:

SELECT json_object(array_agg(data.name),
    -- here I cast the value into the right datatype!
    -- results in an error
    array_agg(CAST(value AS datatype))) AS data
FROM data
JOIN (
    SELECT name, datatype
    FROM meta) AS info ON info.name = data.name

The error message is the following:

ERROR: type "datatype" does not exist
LINE 3: array_agg(CAST(value AS datatype))) AS data
                                ^
Query failed
PostgreSQL said: type "datatype" does not exist

So is it possible to dynamically cast the text of the data_type column to a PostgreSQL type to return a well-formatted JSON object?
First, that's a terrible abuse of SQL, and ought to be avoided in practically all scenarios. If you have a scenario where this is legitimate, you probably already know your RDBMS so intimately that you're writing custom indexing plugins, and wouldn't even think of asking this question... If you tell us what you're actually trying to do, there's about a 99.9% chance we can tell you a better way to do it. Now with that disclaimer aside: this is not possible without using dynamic SQL. In PostgreSQL you can accomplish this with PL/pgSQL's EXECUTE statement, which you can read about in the manual. Note, however, that even using this method, the result for every row fetched in the same query must have the same data type. In other words, you can't expect that row 1 will have a data type of VARCHAR, and row 2 will have INT. That is completely impossible.
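To make the dynamic-SQL route concrete, a minimal sketch of a hypothetical helper; note how it illustrates the limitation above, since the function must declare one fixed return type (text here):

CREATE OR REPLACE FUNCTION cast_dynamically(val text, target_type text)
RETURNS text AS $$
DECLARE
    result text;
BEGIN
    -- Build the cast at runtime: %L quotes the value as a literal;
    -- target_type is interpolated as-is, so it must come from a trusted
    -- source (e.g. your own meta table), not from user input.
    EXECUTE format('SELECT CAST(%L AS %s)::text', val, target_type)
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

Usage would look like SELECT cast_dynamically('5.231', 'numeric'); which validates the cast but, as noted, still hands the value back as text.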
The problem you have is that json_object creates an object out of a string array for the keys and another string array for the values. So if you feed your JSON objects into this method, it will always return an error. The first problem is that you have to use a json or jsonb column for the values, or convert the values from string to json with to_json(). The second problem is that you need a different method to create your JSON object, one that accepts a string key and a json value per row: for this there is a method called json_object_agg. Then your output should be like the one you expected! Here is the full query:

SELECT json_object_agg(data.name, to_json(data.value)) AS data
FROM data
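If the values also need to come back with their proper JSON types (numbers as numbers, and so on), one option is to combine this with the meta table and enumerate the stored datatypes explicitly, since a fully dynamic cast is still not possible in plain SQL. A sketch (the datatype names 'float' and 'integer' are assumptions about what meta contains):

SELECT json_object_agg(
           data.name,
           CASE info.datatype
               WHEN 'float'   THEN to_json(data.value::float)
               WHEN 'integer' THEN to_json(data.value::int)
               ELSE to_json(data.value)
           END) AS data
FROM data
JOIN meta AS info ON info.name = data.name;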