Not all data read in from SQL database to R

I am creating a Shiny app and have been using data from Microsoft SQL Server Management Studio by creating my table with the query below, saving it as a CSV, and reading it in with
alldata<-read.csv(file1$datapath, fileEncoding = "UTF-8-BOM")
with the above code in my server function and the below code in my ui function
fileInput("file1", "Choose CSV File", accept=".csv")
Using this code, I have been able to manipulate all of the data (creating tables and plots) from the CSV successfully. I wanted to try obtaining the data directly from the SQL server when my app loads, instead of going into SQL, executing the query, saving the data, and then loading it into my app. I tried the code below, and it sort of works. For example, the variable CODE has 30 levels, all of which are represented and can be manipulated when I read the data in from the CSV, but only 23 are represented when I run the code below. Is there a specific reason this may be happening? I also ran the SQL code along with the code that builds my data tables in base R, instead of Shiny, to see if I could spot something specific not being read in correctly, but it all works perfectly when I run it line by line.
library(RODBCext)
dbhandle<-odbcDriverConnect('driver={SQL Server}; server=myserver.com;
database=mydb; trusted_connection=true')
query<-"SELECT CAST(r.DATE_COMPLETED AS DATE)[DATE]
, res.CODE
, r.TYPE
, r.LOCATION
, res.OPERATION
, res.UNIT
FROM
mydb.RECORD r
LEFT OUTER JOIN mydb.RESULT res
ON r.AMSN = res.AMSN
and r.unit = res.unit
where r.STATUS = 'C'
and res.CODE like '%ABC-%'"
auditdata<-sqlExecute(channel=dbhandle, query=query, fetch=TRUE, stringsAsFactors=FALSE)
odbcClose(dbhandle)
*I only want the complete data set loaded once per Shiny session, so I currently have this outside of the server function in my server.R file.

Try adding believeNRows=FALSE to your sqlExecute call. RODBC doesn't return the full result set without this parameter when the ODBC driver misreports how many rows are available.
auditdata<-sqlExecute(channel=dbhandle, query=query, fetch=TRUE, stringsAsFactors=FALSE, believeNRows=FALSE)


Is there a way to execute a text Gremlin query with PartitionStrategy

I'm looking for a way to run a text query, e.g. "g.V().limit(1).toList()", while using the PartitionStrategy in Apache TinkerPop.
I'm attempting to build a REST interface to run queries on selected graph partitions only. I know how to run a raw query using Client, but I'm looking for a way to create a multi-tenant graph (https://tinkerpop.apache.org/docs/current/reference/#partitionstrategy) and query only selected tenants using a raw text query instead of a GLV. I'm able to query only selected partitions using gremlin-python, but I could not find a reference implementation for running a text query on a single tenant.
Here is the tenant query implementation:
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.strategies import PartitionStrategy

connection = DriverRemoteConnection('ws://megamind-ws:8182/gremlin', 'g')
g = traversal().withRemote(connection)
partition = PartitionStrategy(partition_key="partition_key",
                              write_partition="tenant_a",
                              read_partitions=["tenant_a"])
partitioned_g = g.withStrategies(partition)
x = partitioned_g.V().limit(1).next()  # query on the selected partition only
Here is how I execute a raw query on the entire graph, but I'm looking for a way to run text-based queries on only selected partitions.
from gremlin_python.driver import client
client = client.Client('ws://megamind-ws:8182/gremlin', 'g')
results = client.submitAsync("g.V().limit(1).toList()").result().one()  # runs on the entire graph
print(results)
client.close()
Any suggestions appreciated. TIA
It depends on how the backend store handles text-mode queries, but for the query itself you essentially just need to use the Groovy/Java-style formulation. This will work with Gremlin Server and Amazon Neptune; for other backends you will need to make sure this syntax is supported. From Python you would use something like:
client.submit(
    'g.withStrategies(new PartitionStrategy(partitionKey: "_partition", '
    'writePartition: "b", '
    'readPartitions: ["b"])).V().count()')

Extract incident details from Service Now in Excel

I am trying to extract ticket details from ServiceNow. Is there a way to extract the details without ODBC? I have also tried the solution mentioned in [1]: https://community.servicenow.com/docs/DOC-3844, but I am receiving "error 9 - subscript out of range".
Is there a better way to extract details efficiently? I tried asking this in the ServiceNow forum, but I thought I might get other opinions here.
It's been a while since this question was asked; hopefully the following is still useful.
I am extracting change data (not incidents), but the process should still be the same. You will need to gather the incident table and column information. Then there are a couple of ways to approach the problem.
1) If the data you are extracting has fixed parameters, such as a fixed period, fixed columns, or a fixed group, then you can create a report within ServiceNow and use the REST/SOAP API to get the data in text/CSV format. You can use various Python modules to convert from CSV to XLS or XLSX depending on your need; I used openpyxl, csv, xlsreader, xlswriter, etc.
See here for an example:
ServiceNow - How to use SOAP to download reports
2) If the data has dynamic parameters, where you need to change columns, dates, or filters, you can still use the SOAP/REST API but form the query within your Python script instead of relying on a static report. This way you can change it based on your requirements on the fly.
Here is an example query for the DB; you can use it with the example above by just switching the URL to the following.
table_name = 'u_change_table_name'  # SN DB table holding change/incident info
table_limit = 800
table_query = 'active=true&sysparm_display_value=true&planned_start_date=today'  # alternative filter
date_query = 'chg_start_date>=javascript:gs.daysAgoStart(1)^active=true^chg_type=normal'
table_fields = 'chg_number,chg_start_date,chg_duration,chg_end_date'  # actual column names from the DB, not from the SN report
url = (
    'https://yourcompany.service-now.com/api/now/table/' + table_name +
    '?sysparm_query=' + date_query +
    '&sysparm_fields=' + table_fields +
    '&sysparm_limit=' + str(table_limit)
)
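Since a hand-built URL like the one above leaves characters such as `^` and `>=` unencoded, a small helper using only the Python standard library can assemble it safely. This is a sketch: the instance name, table, and field names are the placeholders from the answer, and authentication against a real instance is omitted.

```python
from urllib.parse import urlencode

def build_table_api_url(instance, table, query, fields, limit):
    """Build a ServiceNow Table API URL with percent-encoded query parameters."""
    params = urlencode({
        'sysparm_query': query,
        'sysparm_fields': fields,
        'sysparm_limit': limit,
    })
    return 'https://' + instance + '.service-now.com/api/now/table/' + table + '?' + params

url = build_table_api_url(
    'yourcompany', 'u_change_table_name',
    'chg_start_date>=javascript:gs.daysAgoStart(1)^active=true^chg_type=normal',
    'chg_number,chg_start_date,chg_duration,chg_end_date', 800)
print(url)
```

The resulting URL can then be fetched with any HTTP client, passing the instance's basic-auth credentials.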

Syntax issue with Qlikview STORE INTO qvd

I'm consistently unable to store data into a QVD file. The syntax I've been using works perfectly in other applications, but for some reason I can't get it to work in the application I'm trying to finish.
I've successfully developed a data model; now I need to loop through the years, reduce the data to just each specific year, and save that out to individual QVD files.
Here's the code:
FOR i = $(vL.MinFY) TO $(vL.MaxFY)
LET _table = 'T_FACT_$(i)_GL_BALANCES';
[$(_table)]:
LOAD
*
RESIDENT [FINAL BALANCES]
WHERE LEFT([Year-Period],4) = $(i)
;
IF NoOfRows('$(_table)') > 0 THEN
STORE $(_table) INTO [$(vG.TransformQVDPath)$(_table).QVD];
END IF
LET vRowCount_$(i) = NoOfRows('$(_table)');
DROP TABLE [$(_table)];
NEXT
The script burps on the DROP TABLE statement, and nothing is saved to QVD. I've tried various combinations of dollar-sign expansion, with and without quotes, brackets, etc.
Hope someone can help me determine what I'm missing.
DOH! Figured it out.
The LOAD statement requires a NOCONCATENATE, or else each iteration of the table is simply concatenated with the source table. Updated script for the load section is:
[$(_table)]:
NOCONCATENATE LOAD
*
RESIDENT [FINAL BALANCES]
WHERE LEFT([Year-Period],4) = $(i)
;
After doing that everything worked as expected.

Reading SQLite Database with ODBC Provider : Error with fields with float values

Here is the context
I'm building a library in VB.NET to quickly handle SQL database data in Windows Forms front ends. I'm using ADODB connections and recordsets.
I managed to link the front end to Access databases, MS SQL Server, and MySQL; recently I've been working on SQLite databases to provide quick and portable SQL providers.
Here is the problem
As I understand it, SQLite stores single/double/float values using the IEEE 754 standard, so for example a stored value of 9.95 will be read back as 9,94999980926514.
So when I load the record again, I can edit it (other fields) and update it. But if I try to edit the float value (let's say changing 9,94999980926514 to 10) and then update it, I get an error; see the sample code:
Dim LocRs As New ADODB.Recordset
LocRs.Open("SELECT ID_Montant,Mont_Value,Mont_Date FROM tMontants", SQLConnection)
LocRs.AddNew()
LocRs("ID_Montant").Value = 666
LocRs("Mont_Value").Value = 9.95
LocRs("Mont_Date").Value = Date.Today
LocRs.Update()
LocRs.Close()
' No problems
LocRs.Open("SELECT ID_Montant,Mont_Value,Mont_Date FROM tMontants WHERE ID_Montant=666", SQLConnection)
LocRs("Mont_Date").Value = Date.Today.AddDays(-2)
Console.WriteLine(LocRs("Mont_Value").Value) ' Returns 9,94999980926514
LocRs.Update()
LocRs.Close()
' No problem again
LocRs.Open("SELECT ID_Montant,Mont_Value,Mont_Date FROM tMontants WHERE ID_Montant=666", SQLConnection)
LocRs("Mont_Value").Value = 10
LocRs.Update()
' Error: Row cannot be located for updating. Some values may have been changed
' since it was last read. Error code -2147217864
The error code seems to be of little help.
I'm using
locRs.LockType = LockTypeEnum.adLockOptimistic
locRs.CursorType = CursorTypeEnum.adOpenKeyset
locRs.CursorLocation = CursorLocationEnum.adUseClient
But I have tried a lot of combinations without success.
As for the provider, I'm using the Werner ODBC Provider.
I had a similar problem with Date fields when editing a field holding an incomplete date like 2012-03-01 12:00:00 instead of 2012-03-01 12:00:00.000, but I corrected that by formatting the date in its complete form.
Now I'm stuck with this, because the SQLite database stores approximate float values whether or not I round them before storing them with an update.
I'm guessing it's a provider problem when it computes the update, because I can run
"UPDATE tMontants SET Mont_Value = 10 WHERE ID_Montant = 666"
without any problem.
But I'd really like to get the recordset approach working, so I can integrate SQLite solutions into every piece of software I've deployed so far.
Thanks for your time.
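As a side note on the rounding itself: 9,94999980926514 is exactly what 9.95 becomes when rounded to IEEE 754 single precision. This can be checked with a short sketch (Python here purely to illustrate the arithmetic):

```python
import struct

# Round 9.95 to the nearest IEEE 754 single-precision value,
# then widen it back to double precision to display the stored value.
as_single = struct.unpack('f', struct.pack('f', 9.95))[0]
print(as_single)  # 9.949999809265137
```

This suggests the value is being surfaced as a 4-byte float somewhere in the chain, which is why the re-read value never matches the 9.95 the client originally wrote.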

Export Data from SQL to CSV

I'm using Entity Framework to access SQL Server and return data. The data needs to be formatted into a tab-delimited file, which I then want to compress and return to the user.
I can do the select and then iterate over the EF objects, formatting all the data into one big string, but this takes forever (I'm returning about 800k rows). The query itself is quite fast; it's just the creation of the CSV file in memory that is killing it.
I found a post describing how to use sqlcmd to do this directly as an export (but with CSV), which seems very promising, but I'm unclear how to pass -E and the other parameters to ExecuteSqlCommand(), or whether it is even meant for this.
I tried to do something like this:
var test = context.Database.ExecuteSqlCommand(
    "select Chromosome c, StartLocation sl, Endlocation el, GeneName gn " +
    "from Gencode where c = chr1",
    "-E", "-Q", new SqlParameter("-s", "\t"));
But of course that didn't work...
Any suggestions as to how to go about this? I'm using EF 6.1 if that matters.
An alternate option using a simple method: run the query (F5) with the results directed to a file, and keep the file name.
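If the export has to stay inside the application, the usual fix for the slow part is to stream each row to a compressed, tab-delimited file as it is read, instead of building one big string in memory. A minimal sketch of the pattern in Python (the rows, column names, and file name are placeholders standing in for the EF query results):

```python
import csv
import gzip
import os
import tempfile

# Placeholder rows standing in for the ~800k rows returned by the query.
rows = [
    ('chr1', 11873, 14409, 'DDX11L1'),
    ('chr1', 14361, 29370, 'WASH7P'),
]

out_path = os.path.join(tempfile.gettempdir(), 'gencode.tsv.gz')

# Stream each row straight into a gzip-compressed, tab-delimited file:
# memory use stays constant no matter how many rows come back.
with gzip.open(out_path, 'wt', newline='') as handle:
    writer = csv.writer(handle, delimiter='\t')
    writer.writerow(('Chromosome', 'StartLocation', 'EndLocation', 'GeneName'))
    writer.writerows(rows)
```

The same streaming idea carries over to C#: write each row to a StreamWriter wrapped in a GZipStream rather than concatenating strings.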