How to fetch distinct single column values from a GemFire region without a where clause? - gemfire

I am new to GemFire and trying to fetch the distinct values of a single column from GemFire. I was trying the query below, but it's giving me an error -
select transactionStatus from /Transaction limit 1000
Error:
Failed to execute query, caused by NullPointerException
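For what it's worth, GemFire OQL does support the DISTINCT keyword, so a query along these lines should return the distinct values the title asks for (a sketch, assuming transactionStatus is a field on the values stored in /Transaction):
SELECT DISTINCT t.transactionStatus FROM /Transaction t LIMIT 1000
Projecting a single field like this returns a collection of the field values rather than whole entries, so no WHERE clause is needed.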

Related

Select bottleneck and INSERT INTO SELECT doesn't work on CockroachDB

I have to union two tables with the query below, and 'table2' has 15 GB of data, but the query fails with errors. I set --max-sql-memory=.80 and I don't know how to solve this.
When I execute this query with a LIMIT 50000 clause, it works!
Even 'select * from table2' shows the same error.
I think there is a SELECT bottleneck somewhere...
Also, with this query, unusually, the latency of only one of the three nodes goes up (AWS EC2 i3.xlarge instances).
▶ Query
insert into table1 (
InvoiceID, PayerAccountId, LinkedAccountId, RecordType, RecordId, ProductName
)
select
InvoiceID, PayerAccountId, LinkedAccountId, RecordType, RecordId, ProductName
from table2;
▶ Error:
driver: bad connection
warning: connection lost!
opening new connection: all session settings will be lost
▶ Log:
W180919 04:59:20.452985 186 storage/raft_transport.go:465 [n3] raft transport stream to node 2 failed: rpc error: code = Unavailable desc = transport is closing
W180919 04:59:20.452996 190 vendor/google.golang.org/grpc/clientconn.go:1158 grpc: addrConn.createTransport failed to connect to {10.240.98.xxx:26257 0 }. Err :connection error: desc = "transport: Error while dialing cannot reuse client connection". Reconnecting...
If I'm understanding your question correctly, you're using a single statement to read ~15 GB of data from table2 and insert it into table1. Unfortunately, as you've discovered, this won't work. See the documentation on limits for a single statement, which covers exactly this scenario. Setting --max-sql-memory=.80 will not help and will most likely hurt, as CockroachDB needs some breathing room given that our memory tracking is not precise. The "bad connection" warning and the error you found in the logs are both symptoms that occur when a CockroachDB process has crashed, presumably due to running out of memory.
If you need to copy the data from table2 to table1 transactionally, then you're a bit out of luck at this time. While you could try using an explicit transaction and breaking the single INSERT statement into multiple statements, you'll very likely run into transaction size limits. If you can handle performing the copy non-transactionally, then I'd suggest breaking the INSERT into pieces, something along the lines of:
INSERT INTO table1 (...)
SELECT ... FROM table2 WHERE InvoiceID > $1 ORDER BY InvoiceID LIMIT 10000
RETURNING InvoiceID
The idea here is that you copy in 10k-row batches, using the RETURNING InvoiceID clause to track the last InvoiceID that was copied and starting the next insert from there.
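Spelled out with the column list from the original statement, each batch might look like this (the $1 placeholder stands for the last InvoiceID copied by the previous batch; the ORDER BY keeps the LIMIT and the key tracking consistent):
-- Hypothetical batch shape; start $1 below the smallest InvoiceID in table2
INSERT INTO table1 (InvoiceID, PayerAccountId, LinkedAccountId, RecordType, RecordId, ProductName)
SELECT InvoiceID, PayerAccountId, LinkedAccountId, RecordType, RecordId, ProductName
FROM table2
WHERE InvoiceID > $1
ORDER BY InvoiceID
LIMIT 10000
RETURNING InvoiceID;
Taking the maximum of the returned InvoiceIDs gives the starting point for the next batch, and the loop stops once a batch returns fewer than 10,000 rows.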

MS Access error - The field is too small to accept the amount of data you attempted to add

I am trying to build a select query in MS Access, but I am getting the error message: "The field is too small to accept the amount of data you attempted to add."
The SQL is:
SELECT ProjectList.ProjectID, UpcomingOpenMilestone.MinOfPlannedDate,
       UpcomingOpenMilestone.FirstOfMilestone
FROM UpcomingOpenMilestone
RIGHT JOIN ProjectList ON UpcomingOpenMilestone.ProjectID = ProjectList.ProjectID;
The query pulls from one other query and a table. The ProjectList.ProjectID field is an AutoNumber field.
The UpcomingOpenMilestone query is generated from one table and its SQL is as follows:
SELECT MilestoneTracking.ProjectID,
       Min(MilestoneTracking.PlannedDate) AS MinOfPlannedDate,
       First(MilestoneTracking.Milestone) AS FirstOfMilestone
FROM MilestoneTracking
GROUP BY MilestoneTracking.ProjectID, MilestoneTracking.ActualDate
HAVING (((MilestoneTracking.ActualDate) Is Null));
ProjectID is the foreign key and therefore a Number-type field, Milestone is Short Text limited to 200 characters, and PlannedDate and ActualDate are both Date-type fields.
I don't understand why the field would be too small. I've tried limiting the query that generates the error to pull in only ProjectID or other single fields, but it does not work. The only way it will run is if I drop the RIGHT JOIN.
Any help is appreciated!

Using values from two tables to run a query in Hive

I would like to run a Hive query that divides a column from one table by the total sum of a column from another table.
Do I have to join the tables?
The code below generates errors:
Select 100*(Num_files/total_Num_files) from jvros_p2, jvros_p3;
FAILED: Parse Error: line 1:75 mismatched input ',' expecting EOF near 'jvros_p2'
Yes, jvros_p3 is a single-row, single-column table.
Num_files is a column in jvros_p2 and total_Num_files is a single value in jvros_p3.
Your older Hive version may be why your notation isn't working. Try this:
SELECT 100 * (Num_files / total_Num_files) FROM jvros_p2 JOIN jvros_p3;
I suspect that if you are eventually able to upgrade to at least Hive 0.13, the implicit join notation via comma-separated tables will be supported, per HIVE-5558.
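On Hive 0.10 or later, the same cross product can also be written explicitly, which makes the intent clearer (same tables and columns as above, with aliases added):
SELECT 100 * (p2.Num_files / p3.total_Num_files)
FROM jvros_p2 p2
CROSS JOIN jvros_p3 p3;
Since jvros_p3 has only one row, the cross join simply attaches total_Num_files to every row of jvros_p2.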

Forming an insert into query using rimpala in R

I am trying to execute an INSERT INTO query on an Impala table using the rimpala.query() function through R, but I am getting an error. The query that I am executing is:
# Build and run one INSERT per row of data_frame1
for (x in 1:nrow) {
  rite <- paste("INSERT INTO table1 (account_no, data_id, date_id, industry_no, sales_no, sales) VALUES (1445367,", data_frame1$data_id[x], ",25,11346,23,", data_frame1$sales[x], ")", sep = "")
  sql <- rimpala.query(rite)
}
where data_frame1 is the data frame which has a bunch of rows and nrow is the number of rows in data_frame1. The first INSERT INTO statement executes and the first row is inserted into the database, but it throws an error just after executing:
Error in rimpala.query(sql) : SQL error Error: The query did not generate a result set!
How do I remove this error?
The error is in the RImpala client, which uses executeQuery to run all queries, even those that modify state. It should use executeUpdate for DDL and for INSERT, UPDATE, or DELETE statements. I've filed an issue upstream for you.

Query returned non-zero code: 10, cause: FAILED

Hi all,
When I use Hive to select the id from a table, the following error occurs:
Query returned non-zero code: 10, cause: FAILED: Error in semantic analysis: Line 1:68 Invalid table alias or column reference 'Goldman'
Can anybody give me some suggestions?
Your error seems to indicate that you are selecting a column named Goldman that does not exist. Either you are attempting an HQL query for a column named goldman, or you are attempting to query for rows in which a specific column has the value Goldman.
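If it is the latter, the usual culprit is an unquoted string literal: HiveQL parses a bare word like Goldman as a column reference. A hypothetical illustration (the table and column names here are made up, not from the original question):
-- Fails with "Invalid table alias or column reference 'Goldman'"
SELECT id FROM accounts WHERE customer_name = Goldman;
-- Works: the value is quoted as a string literal
SELECT id FROM accounts WHERE customer_name = 'Goldman';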