I have a website that I host with GoDaddy. At the moment, if I want to run an SQL query on the database, I use GoDaddy's default phpMyAdmin interface (all done through the browser).
I have a table with 32,000 records. One of the columns in the table contains a JSON string that looks something like this:
{
  "activity": {
    "section1": {
      "item1": {
        "itemName": {
          "name": "myName",
          "property1": false,
          "property2": false
        }
      }
    },
    "section2": {
      "item1": false
    }
  }
}
Over time I may want to update this JSON string (e.g. if the schema changes and I want to add a section3 there). If I try to do this now (even if the new string is hardcoded and identical for each of the 32,000 records in the table), the query just times out. I suspect 32,000 is too many records for this operation.
I tried running the query through phpMyAdmin's SQL Query tab; it got about halfway through and then timed out.
My question is: what is the best way to work with the database? Is there a more efficient way to run queries than through GoDaddy's default phpMyAdmin interface?
I don't know which plan you use at GoDaddy, but you can enable remote access on all paid plans. Then you can connect to your database using the MySQL Workbench tool, which I think is better than phpMyAdmin.
Another solution is to execute the query from PHP (and perhaps split it up into multiple smaller queries). You can host a PHP script directly on your GoDaddy server.
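For example, a rough sketch of the batched approach, so that no single statement hits the timeout (mytable, id and json_col are placeholder names, not taken from the question):

-- run each statement as its own query, e.g. from a PHP loop or one at a time in phpMyAdmin
UPDATE mytable SET json_col = '{"activity": { ... }}' WHERE id BETWEEN 1 AND 8000;
UPDATE mytable SET json_col = '{"activity": { ... }}' WHERE id BETWEEN 8001 AND 16000;
UPDATE mytable SET json_col = '{"activity": { ... }}' WHERE id BETWEEN 16001 AND 24000;
UPDATE mytable SET json_col = '{"activity": { ... }}' WHERE id BETWEEN 24001 AND 32000;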
In any case, storing JSON documents in a text field is not a good idea. You may want to read a few articles about database normalization. See also this other question.
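As a rough sketch of what a normalised layout could look like for the JSON above (all table and column names here are hypothetical):

-- one row per activity item instead of one JSON blob per record
CREATE TABLE activity_item (
  record_id INT         NOT NULL,   -- references the existing records table
  section   VARCHAR(50) NOT NULL,   -- e.g. 'section1', 'section2'
  item      VARCHAR(50) NOT NULL,   -- e.g. 'item1'
  name      VARCHAR(100),           -- e.g. 'myName'
  property1 TINYINT(1)  NOT NULL DEFAULT 0,
  property2 TINYINT(1)  NOT NULL DEFAULT 0,
  PRIMARY KEY (record_id, section, item)
);

A schema change then becomes an ALTER TABLE or a few INSERTs rather than rewriting 32,000 JSON strings.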
I have a query like SELECT '16453842' AS ACCOUNT, and I want to change this account number to a dynamic one. Can anyone tell me some possible ways to do it?
The scenario is that I would also like to accommodate other values for ACCOUNT, so that multiple account numbers can be used.
The Node.js code uses that SQL via carrierData.sql, as in the code below:
writeDebug({'shipment': shipment, 'carrier': carrier, 'message': 'reading sql file: ' + carrierData.sql});
fs.readFile(carrierData.sql, 'utf8', function (err, sql) {
  // ... the SQL read from the file is then executed against the database
});
carrierData comes from a JSON file whose sql property contains the path and name of the SQL file to use. Finally, that SQL file contains a query like the one below:
SELECT 'T' AS RECORD
, '16453842' AS ACCOUNT
And here lies my problem: we have some additional ACCOUNT numbers that we would also like to accommodate.
The service starts with node server.js, which calls the workers.js file containing the code I pasted above.
So please let me know what the possible ways to do this are.
Here are things to research (a small SQL sketch follows this list):
Using bind variables. These are used for security and for performance when a statement is executed multiple times.
Using multiple values in IN clauses.
Using executeMany() if you are loading data into the database.
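For the first two points, the idea is to replace the hard-coded literal in the SQL file with a bind placeholder and supply the account number(s) at execution time. A sketch only; the placeholder syntax (:name, ?, etc.) depends on your database driver, and the accounts table in the second query is a hypothetical name:

SELECT 'T'      AS RECORD
     , :account AS ACCOUNT   -- value passed in by the Node.js driver at execute time

-- or, to match several accounts at once:
SELECT account_no AS ACCOUNT
  FROM accounts
 WHERE account_no IN (:acct1, :acct2, :acct3)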
Library: https://github.com/Weakky/ra-data-opencrud/
I've been struggling for 5 days and am not able to fix this. I'm using Prisma with MySQL.
If the tables in the database were named like User and Post, it would work fine.
The issue is that all the tables are named like am_user, am_post.
In this library they use this:
${pluralize(camelCase(resource.name))};
Who is the hero who can save me? I'm not able to find any workaround.
I am not familiar with Prisma, but this question is labelled react-admin. Do you have no control over your schema type definitions? React-admin, I am thinking, would only care how your schema types are named, not your DB tables, and there must be a way to alias Post to target am_post in Prisma. I had to do the same thing, but I used Sequelize, and its aliasing functionality is quite simple.
It sounds to me like you didn't create your database with Prisma and are attempting to use Prisma with an existing database? I tried this and could not get Prisma to run; that functionality was experimental when I tried it, and I believe it still is, and introspection works for Postgres only. However, something like this is what I was talking about, from a quick Google search of the Prisma docs (this is for Postgres):
type Bill @pgTable(name: "ma_bill") {
  bill: String!
  id: Int! @unique
  bill_products: [Bill_product]
}
This link seems to say introspection is possible for MySQL though, check it out:
https://www.npmjs.com/package/prisma-db-introspection
I am trying to implement several Azure Logic Apps that query/update an Azure SQL Server database. The queries return either one value or a table with several rows. I prefer not to create stored procedures, but instead to use the 'Execute SQL Query' connector. My queries run fine in the Logic Apps, but I have not found a way to extract the output of the queries to use in subsequent steps or to return in an HTTP response.
Can someone guide me on how this can be done for both single-value and table outputs?
If for some reason you don't want to create a SP, or cannot do it, you can access your custom query results by using this in your JSON:
#body('Name_of_Execute_SQL_Query_step')?['resultsets']['Table1'][0]['NameOfYourColumn']
If you can't find the exact "path" for your data, run and let it fail. Then go check the failing step and there in "Show raw outputs" you will be able to see the results of the Execute SQL Query step. For example:
{
  "OutputParameters": {},
  "ResultSets": {
    "Table1": [
      {
        "Date": "2018-05-28T00:00:00"
      }
    ]
  }
}
To access that date, you'd of course need to use:
#body('Name_of_Execute_SQL_Query_step')?['resultsets']['Table1'][0]['Date']
Stored Procedures are always better for many reasons, and their output can be reasonably well inferred by the Connector. That's why Stored Procedure output lights up in the designer.
Execute SQL Actions return 'untyped' content which is why you don't see specific elements in the designer.
To use the Execute SQL output like a Stored Procedure output, you would have to define the JSON Schema yourself, and use the Parse JSON Action to light up the SQL output.
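As a rough example, a schema for the sample output shown earlier could look like this (adjust the table name and column names/types to your own result set):

{
  "type": "object",
  "properties": {
    "OutputParameters": { "type": "object" },
    "ResultSets": {
      "type": "object",
      "properties": {
        "Table1": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "Date": { "type": "string" }
            }
          }
        }
      }
    }
  }
}

The Parse JSON action's output then exposes Table1 and its columns as selectable tokens in later steps.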
I used R to create a SQLite database, and now I want to read from it (in parallel from multiple cores that have access to the same SQLite DB) multiple times and write to another DB multiple times, ~1,000 or more times in parallel. However, when I try such an operation I get the following error:
sqliteFetch(rs, n = -1, ...) :
RSQLite driver: (RS_SQLite_fetch: failed first step: database is locked)
In my script I am running the following two commands that I think are giving the error (not sure if it comes from the reading or the writing):
dbGetQuery(db1, sql.query)
# later on...
if (dbExistsTable(db2, table.name)) {
  dbWriteTable(db2, table.name, my.df, append = T)
} else {
  dbWriteTable(db2, table.name, my.df)
}
Do you know if such an operation is possible? If so, is there any way to do it and avoid this error? I asked this question before and was referred to the ACID design of databases, which makes me think such an operation should be possible, but somehow it's not working.
I am also open to suggestions along the lines of "you can use MySQL for that, it should work better", etc.
Thanks!
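One mitigation worth trying (a sketch, assuming a reasonably recent SQLite build and that these settings are acceptable for your workload) is to set a busy timeout and WAL journaling on each connection before reading or writing, e.g. by sending these statements with dbGetQuery:

PRAGMA busy_timeout = 5000;   -- wait up to 5 seconds for a lock instead of failing immediately
PRAGMA journal_mode = WAL;    -- lets readers proceed while a single writer is active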
I have a big database on an MSSQL server that contains data indexed by a web crawler.
Every day I want to update the Solr search engine index using the DataImportHandler, which is situated on another server and another network.
The Solr DataImportHandler uses a query to get data from SQL Server. For example, this query:
SELECT * FROM DB.Table WHERE DateModified > Config.LastUpdateDate
The ImportHandler does 8 selects of this type. Each select gets around 1,000 rows from the database.
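For reference, a delta-style DIH query usually supplies the cutoff through the handler's built-in last_index_time variable rather than a config table (a sketch, assuming the standard data-config setup):

SELECT *
  FROM DB.Table
 WHERE DateModified > '${dataimporter.last_index_time}'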
To connect to SQL Server I am using com.microsoft.sqlserver.jdbc.SQLServerDriver.
The parameters I can add for connection are:
responseBuffering="adaptive/all"
batchSize="integer"
So my question is:
What can go wrong while running these queries every day (apart from network errors)?
I want to know how SQL Server behaves in this context.
Furthermore, I have to make a decision regarding the way I will implement this import and how to handle errors, but first I need to know what errors can arise.
Thanks!
Later edit
My problem is that I don't know how these SQL queries can fail. When I call this importer every day it runs 10 queries against the database. If the 5th query fails I have two options:
roll back the entire transaction and do it again, or commit the data I got from the first 4 queries and somehow redo queries 5 to 10. But if these queries always fail, because of some other problem, I need to think of another way to import this data.
Can these SQL queries over the internet fail because of timeouts or something like that?
The only problem I identified after working with this type of import is:
Network problems: if the network connection fails, Solr rolls back any changes and the commit doesn't take place. In my program I identify this as an error and don't log the changes in the database.
Thanks @GuidEmpty for providing his comment and clarifying this for me.
There could be issues with permissions (not sure if you control these).
It might be a good idea to catch the exceptions you can think of and include a catch-all (Exception exp).
Then treat the catch-all as the worst case, roll back (where you can), and log the exception to investigate later.
You don't say what column types you are selecting either; keep in mind that text/blob columns take a lot more space and could cause issues internally if you buffer any data, etc.
Though on a quick re-read, you don't need to roll back if you are only selecting.
I think you would be better off having a think about what you are hoping to achieve and whether knowing all possible problems will help.
HTH