I am wondering if there is some way to read/update a data structure in a PostgreSQL table "at once", as is possible with files.
Public Structure myStruct
    Dim p_id As Integer
    Dim p_myInt As Integer
    Dim p_myString As String
    Dim p_myDecNumber As Double
End Structure

Dim st As myStruct
FileGet(ff, st, recordnum)   ' reads the whole record at once

OR

st.p_id = 1234
st.p_myInt = 14
st.p_myString = "tarzan"
st.p_myDecNumber = 3.14
FilePut(ff, st, recordnum)   ' writes the whole record at once
The question is: if we have a table whose columns are of the same types as the members of structure "st", wouldn't it be possible to insert/update the whole structure at some index in the table, instead of writing every single member one-by-one as I do now?
Have you tried PostgreSQL composite types?
E.g.
CREATE TYPE myStruct AS (
p_Id integer,
myInt integer,
myString text,
myDecNumber double precision
);
CREATE TABLE myStructs (
value myStruct
);
INSERT INTO myStructs VALUES ( ROW(1234, 14, 'tarzan', 3.14) );
http://www.postgresql.org/docs/9.2/static/rowtypes.html#AEN7268
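If you are driving this from application code, the "whole record at once" idea can also be expressed with a single parameterized statement, so you never assign columns one by one. Here is a minimal sketch in Python (using the stdlib sqlite3 module purely for illustration; with PostgreSQL you would use a driver such as psycopg2 and the same pattern, and the table/column names here are assumptions, not from the original code):

```python
import sqlite3
from dataclasses import dataclass, astuple

# Hypothetical counterpart of the VB structure "myStruct"
@dataclass
class MyStruct:
    p_id: int
    p_myInt: int
    p_myString: str
    p_myDecNumber: float

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE my_structs (p_id INTEGER PRIMARY KEY, "
    "my_int INTEGER, my_string TEXT, my_dec_number REAL)"
)

st = MyStruct(1234, 14, "tarzan", 3.14)

# Insert the whole structure in one statement instead of member-by-member
conn.execute("INSERT INTO my_structs VALUES (?, ?, ?, ?)", astuple(st))

# Update the whole row "at once" the same way
st.p_myString = "jane"
conn.execute(
    "UPDATE my_structs SET my_int = ?, my_string = ?, my_dec_number = ? "
    "WHERE p_id = ?",
    (st.p_myInt, st.p_myString, st.p_myDecNumber, st.p_id),
)
```

The record index of the file-based version maps naturally onto the primary key in the WHERE clause.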
I write binary data from pictures into my SQLite database into a BLOB field called "icondata".
CREATE TABLE pictures(id INTEGER PRIMARY KEY AUTOINCREMENT, icondata BLOB)
The code snippet to write the binary data is:
SQLcommand.CommandText = "INSERT INTO pictures (id, icondata) VALUES (" & MyInteger & "," & "#img)"
SQLcommand.Prepare()
SQLcommand.Parameters.Add("#img", DbType.Binary, PicData.Length)
SQLcommand.Parameters("#img").Value = PicData
This works fine and I can find the BLOB values in the database using a tool like SQlite Spy.
But how can I search for a specific "icondata" BLOB?
If I try it with:
SQLcommand.CommandText = "SELECT id FROM pictures WHERE icondata=#img"
and with the same parameters as above:
SQLcommand.Prepare()
SQLcommand.Parameters.Add("#img", DbType.Binary, PicData.Length)
SQLcommand.Parameters("#img").Value = PicData
Dim SQLreader As SQLite.SQLiteDataReader = SQLcommand.ExecuteReader()
While SQLreader.Read()
... Process found database entries....
End While
I don't get any result.
If I change the SELECT query to 'LIKE' instead of '=' I get all entries with BLOBs, not only the matching one.
How do I have to write the SELECT query to find the 1 exactly matching entry for a specific BLOB?
You can search on BLOBs. Here are some examples :-
DROP TABLE IF EXISTS pictures;
CREATE TABLE IF NOT EXISTS pictures (id INTEGER PRIMARY KEY, icondata columntypedoesnotmatter);
INSERT INTO pictures (icondata) VALUES
(x'fff1f2f3f4f5f6f7f8f9f0fff1f2f3f4f5f6f7f8f9f0'),
(x'ffffffffffffffffffff'),
(x'010203040506070809'),
(x'010203040506070809010203040506070809')
;
SELECT id, hex(icondata) FROM pictures WHERE icondata = x'010203040506070809' OR icondata = x'FFFFFFFFFFFFFFFFFFFF';
SELECT id, hex(icondata) FROM pictures WHERE icondata LIKE '%'||x'F0'||'%';
SELECT id, hex(icondata) FROM pictures WHERE hex(icondata) LIKE '%F0%';
This results in :-
The first query, as expected, finds the two matching rows (the 2nd and 3rd).
The second query does not work as expected and is an example of how not to use LIKE: SELECT id, hex(icondata) FROM pictures WHERE icondata LIKE '%'||x'F0'||'%'; results in two unexpected rows.
I believe that this behaviour is due to only checking the higher/lower order bits but I can't find the relevant documentation.
The third query, however, converts the stored BLOB into its hexadecimal string representation using the hex() function and compares that against the string representation of F0: SELECT id, hex(icondata) FROM pictures WHERE hex(icondata) LIKE '%F0%'; results in the expected single row (the 1st).
I've never used vb.net so I'm not sure how you'd code the above as passed parameters.
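As a language-neutral illustration of the parameter-binding approach, here is a sketch in Python's stdlib sqlite3 (the VB.NET equivalent would bind its DbType.Binary parameter the same way, so the bytes are compared exactly rather than as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pictures (id INTEGER PRIMARY KEY, icondata BLOB)")

pic_data = bytes.fromhex("010203040506070809")
other = bytes.fromhex("ffffffffffffffffffff")
conn.execute("INSERT INTO pictures (icondata) VALUES (?)", (pic_data,))
conn.execute("INSERT INTO pictures (icondata) VALUES (?)", (other,))

# Equality search on a BLOB works when the value is bound as a binary
# parameter: the comparison is byte-for-byte, and only the exact match hits.
rows = conn.execute(
    "SELECT id FROM pictures WHERE icondata = ?", (pic_data,)
).fetchall()
print(rows)
```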
BigQuery and SQL noob here. I was going through the data types BigQuery supports here. I have a column in Bigtable which is of type bytes; its original data type is a Scala Long, which was converted to bytes and stored in Bigtable by my application code. I am trying to do CAST(itemId AS integer) (where itemId is the column name) in the BigQuery UI, but the output is 0 instead of the actual value. I have no idea how to do this. If someone could point me in the right direction I would greatly appreciate it.
EDIT: Adding more details
Sample itemId is 190007788462
Following is the code which writes itemId to Bigtable, using the HBase client; I have included the relevant method.
import org.apache.hadoop.hbase.client._
def toPut(key: String, itemId: Long): Put = {
val TrxColumnFamily = Bytes.toBytes("trx")
val ItemIdColumn = Bytes.toBytes("itemId")
new Put(Bytes.toBytes(key))
.addColumn(TrxColumnFamily,
ItemIdColumn,
Bytes.toBytes(itemId))
}
Following is the entry in big table based on above code
ROW COLUMN+CELL
foo column=trx:itemId, value=\x00\x00\x00\xAFP]F\xAA
Following is the relevant Scala code which reads the entry from Bigtable. This works correctly; row is an org.apache.hadoop.hbase.client.Result.
private def getItemId(row: Result): Long = {
val key = Bytes.toString(row.getRow)
val TrxColumnFamily = Bytes.toBytes("trx")
val ItemIdColumn = Bytes.toBytes("itemId")
val itemId =
Bytes.toLong(row.getValue(TrxColumnFamily, ItemIdColumn))
itemId
}
The getItemId function above correctly returns itemId, because Bytes.toLong is part of org.apache.hadoop.hbase.util.Bytes, which correctly converts the byte string to a Long.
I am using a BigQuery UI similar to this one, with CAST(itemId AS integer) because BigQuery doesn't have a Long data type. This incorrectly casts the itemId byte string to an integer, and the resulting value is 0.
Is there any way I can have a Bytes.toLong equivalent from hbase-client in BigQuery UI? If not is there any other way I can go about this issue?
Try this:
SELECT CAST(CONCAT('0x', TO_HEX(itemId)) AS INT64) AS itemId
FROM YourTable;
It converts the bytes into a hex string, then casts that string into an INT64. Note that the query uses standard SQL, as opposed to legacy SQL. If you want to try it with some sample data, you can run this query:
WITH `YourTable` AS (
SELECT b'\x00\x00\x00\xAFP]F\xAA' AS itemId UNION ALL
SELECT b'\xFA\x45\x99\x61'
)
SELECT CAST(CONCAT('0x', TO_HEX(itemId)) AS INT64) AS itemId
FROM YourTable;
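For intuition, what the query does mirrors a big-endian bytes-to-integer decode, the same logic HBase's Bytes.toLong applies. A sketch in Python, using the sample cell value from the question:

```python
# Big-endian decode of the 8-byte value stored by Bytes.toBytes(itemId: Long)
item_bytes = b"\x00\x00\x00\xAFP]F\xAA"  # sample cell value from the question

as_int = int.from_bytes(item_bytes, byteorder="big")

# Equivalent of CAST(CONCAT('0x', TO_HEX(itemId)) AS INT64) in BigQuery:
# hex-encode the bytes, then parse the hex string as an integer.
via_hex = int("0x" + item_bytes.hex(), 16)

print(as_int, via_hex)
```

Both routes yield the same integer, which is why the TO_HEX/CAST trick recovers the original Long.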
Given the following Persistent database definition:
share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Mount
name String
UniqueName name
desc String
deriving Show
FaceSlope
slope Double
mountId MountId
MountIdForFaceSlope mountId
deriving Show
FaceDimensions
zTop Double
zBtm Double
zHeight Double default=5.0
leftx Double
lefty Double
rightx Double
righty Double
mountId MountId
MountIdForFaceDimensions mountId
deriving Show
CurrentMount
mountId MountId
deriving Show
|]
I need to add zHeight to the FaceDimensions table. I gave it a default value for any existing rows already in the db.
When I runMigration migrateAll I get the following error.
PersistMarshalError "field zHeight: Expected Double, received:
PersistText \"z_height\""
To get this to work, I had to manually add the field to the sqlite database.
Is there a way I could have done this directly through Persistent?
Alec: no, it did not work without the default. The problem is adding a new field when there are existing rows that contain no value for it. That is why I tried the default: to see whether the pre-existing rows would get the default value when the new field (column) was added. I reproduced a similar error here.
share
[mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
WristDimensions
name String
UniqueWristDimensionName name
desc String
squaredOffRiserHeight Double
radius Double
power Double
deriving Show
|]
Now I add some data to populate a row, so there is pre-existing data in the table. Then I try adding a new field (newField Double).
share
[mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
WristDimensions
name String
UniqueWristDimensionName name
desc String
squaredOffRiserHeight Double
radius Double
power Double
newField Double
deriving Show
|]
Now when I run the migration, it gives the following error:
Migrating: CREATE TEMP TABLE
"wrist_dimensions_backup"("id" INTEGER PRIMARY KEY,"name"
VARCHAR NOT NULL,"desc" VARCHAR NOT NULL,
"squared_off_riser_height" REAL NOT NULL,
"radius" REAL NOT NULL,"power" REAL NOT NULL,"new_field" REAL NOT
NULL,CONSTRAINT "unique_wrist_dimension_name" UNIQUE ("name"))
Migrating: INSERT INTO "wrist_dimensions_backup"
("id","name","desc","squared_off_riser_height","radius","power") SELECT
"id","name","desc","squared_off_riser_height","radius","power" FROM
"wrist_dimensions"
ChampCad-exe: SQLite3 returned ErrorConstraint while attempting to perform step.
Maybe instead of: runMigration migrateAll there is a way of updating that gets around this constraint.
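One manual workaround (matching the "I had to add the field to the SQLite database by hand" route) is to issue the ALTER TABLE yourself before running the Persistent migration, so existing rows are back-filled with the default. A sketch of the idea using Python's stdlib sqlite3, with the table and column names taken from the migration output above (the demo row values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Simulate the pre-existing table with one populated row.
conn.execute(
    'CREATE TABLE wrist_dimensions (id INTEGER PRIMARY KEY, '
    'name VARCHAR NOT NULL, "desc" VARCHAR NOT NULL, '
    'squared_off_riser_height REAL NOT NULL, '
    'radius REAL NOT NULL, power REAL NOT NULL)'
)
conn.execute(
    'INSERT INTO wrist_dimensions (name, "desc", squared_off_riser_height, '
    "radius, power) VALUES ('w1', 'demo', 1.0, 2.0, 3.0)"
)

# ADD COLUMN with a DEFAULT back-fills the existing rows, avoiding the
# NOT NULL constraint failure seen during the automatic migration.
conn.execute(
    "ALTER TABLE wrist_dimensions "
    "ADD COLUMN new_field REAL NOT NULL DEFAULT 5.0"
)

print(conn.execute("SELECT name, new_field FROM wrist_dimensions").fetchall())
```

After the column exists with the default applied, the Persistent migration has nothing left to rewrite for that field.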
I have an XML column:
<xmlList>
<XMLEntity>
<sug>ACHER</sug>
</XMLEntity>
<XMLEntity>
<sug>DOA</sug>
</XMLEntity>
</xmlList>
The sug can hold only an enum member (ACHER or DOA). I would like to check whether there is a sug without one of these values.
This way I get just the sug nodes where the value is one of the enum values:
SELECT XMLSERIALIZE(XMLQUERY ('//xmlList/XMLEntity/sug[.="ACHER"]' passing
KTOVET ) as char large object) as XXX ,
XMLSERIALIZE(XMLQUERY ('//xmlList/XMLEntity/sug[.="DOA"]' passing
KTOVET ) as char large object) as YYY
FROM "TABLE"
I would like to get the sug nodes where the value is not one of the enums value. Possible?
This gets the sug nodes whose value is neither "ACHER" nor "DOA":
SELECT XMLSERIALIZE(XMLQUERY ('//xmlList/XMLEntity/sug[.!="ACHER" and .!="DOA"]'
passing KTOVET ) as char large object) as XXX
FROM "TABLE"
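The same predicate can be prototyped outside the database, for example with Python's stdlib xml.etree (a sketch; the XML is the sample from the question, with a hypothetical third entry added so the filter has something to find):

```python
import xml.etree.ElementTree as ET

xml_doc = """
<xmlList>
  <XMLEntity><sug>ACHER</sug></XMLEntity>
  <XMLEntity><sug>DOA</sug></XMLEntity>
  <XMLEntity><sug>OTHER</sug></XMLEntity>
</xmlList>
"""

root = ET.fromstring(xml_doc)

# Equivalent of the XPath predicate [. != "ACHER" and . != "DOA"]:
# keep the sug nodes whose text is not one of the allowed enum values.
invalid = [sug.text for sug in root.findall("./XMLEntity/sug")
           if sug.text not in ("ACHER", "DOA")]
print(invalid)
```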
I have a question about JSONStore searchFields.
If I use number as the searchFields key and try to find data with the WL.JSONStore.find method using 0 as the query, it will match all documents (not filtered). With integer, the same case works fine.
What's the difference between number and integer?
JSONStore uses SQLite to persist data; you can read about SQLite data types here. The short answer is that number will store data as REAL, while integer will store data as INTEGER.
If you create a collection called nums with one searchField called num of type number
var nums = WL.JSONStore.initCollection('nums', {num: 'number'}, {});
and add some data:
var len = 5;
while (len--) {
nums.add({num: len});
}
then call find with the query: {num: 0}
nums.find({num: 0}, {onSuccess: function (res) {
console.log(JSON.stringify(res));
}})
you should get back:
[{"_id":1,"json":{"num":4}},{"_id":2,"json":{"num":3}},{"_id":3,"json":{"num":2}},{"_id":4,"json":{"num":1}},{"_id":5,"json":{"num":0}}]
Notice that you got back all the documents you stored (num = 4, 3, 2, 1, 0).
If you look at the .sqlite file:
$ cd ~/Library/Application Support/iPhone Simulator/6.1/Applications/[id]/Documents
$ sqlite3 jsonstore.sqlite
(The android file should be under /data/data/com.[app-name]/databases/)
sqlite> .schema
CREATE TABLE nums ( _id INTEGER primary key autoincrement, 'num' REAL, json BLOB, _dirty REAL default 0, _deleted INTEGER default 0, _operation TEXT);
Notice the data type for num is REAL.
Running the same query used in the find function:
sqlite> SELECT * FROM nums WHERE num LIKE '%0%';
1|4.0|{"num":4}|1363326259.80431|0|add
2|3.0|{"num":3}|1363326259.80748|0|add
3|2.0|{"num":2}|1363326259.81|0|add
4|1.0|{"num":1}|1363326259.81289|0|add
5|0.0|{"num":0}|1363326259.81519|0|add
Notice 4 is stored as 4.0, and JSONStore's queries always use LIKE, so any num containing a 0 will match the query.
If you use integer instead:
var nums = WL.JSONStore.initCollection('nums', {num: 'integer'}, {});
Find returns:
[{"_id":5,"json":{"num":0}}]
The schema shows that num has an INTEGER data type:
sqlite> .schema
CREATE TABLE nums ( _id INTEGER primary key autoincrement, 'num' INTEGER, json BLOB, _dirty REAL default 0, _deleted INTEGER default 0, _operation TEXT);
sqlite> SELECT * FROM nums WHERE num LIKE '%0%';
5|0|{"num":0}|1363326923.44466|0|add
I skipped some of the onSuccess and all the onFailure callbacks for brevity.
The actual difference between a number and an integer searchField is that defining {age: 'number'} indexes 1 as 1.0, while defining {age: 'integer'} indexes 1 as 1. Hope that helps.
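The REAL-vs-INTEGER effect on LIKE matching is easy to reproduce with SQLite alone, independent of JSONStore. A sketch using Python's stdlib sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE as_real (num REAL)")
conn.execute("CREATE TABLE as_int (num INTEGER)")

for n in (4, 3, 2, 1, 0):
    conn.execute("INSERT INTO as_real VALUES (?)", (float(n),))
    conn.execute("INSERT INTO as_int VALUES (?)", (n,))

# REAL values render as '4.0', '3.0', ... so LIKE '%0%' matches every row.
real_hits = conn.execute("SELECT num FROM as_real WHERE num LIKE '%0%'").fetchall()

# INTEGER values render as '4', '3', ... so only the literal 0 matches.
int_hits = conn.execute("SELECT num FROM as_int WHERE num LIKE '%0%'").fetchall()

print(len(real_hits), len(int_hits))
```

This is exactly the difference between the two .schema outputs above: the column's declared type controls how the value is rendered before LIKE compares it as text.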