CockroachDB: Simple SQL queries do not respond

I have set up CockroachDB following the instructions at https://www.cockroachlabs.com/docs/stable/deploy-cockroachdb-on-premises-insecure.html. However, SQL queries take a very long time and do not respond.
The procedure for setting up the cluster is as follows. I used three machines running Ubuntu 19.10.
At node1
cockroach start --insecure --advertise-addr=${NODE1} --join=${NODE1},${NODE2},${NODE3} --cache=.25 --max-sql-memory=.25 --background
At node2
cockroach start --insecure --advertise-addr=${NODE2} --join=${NODE1},${NODE2},${NODE3} --cache=.25 --max-sql-memory=.25 --background
At node3
cockroach start --insecure --advertise-addr=${NODE3} --join=${NODE1},${NODE2},${NODE3} --cache=.25 --max-sql-memory=.25 --background
Here, ${NODEi} stands for the address of each node.
Then, I initialized the cluster.
cockroach init --insecure --host=${NODE1}
After that, I went into SQL shell and typed a query.
#
# Welcome to the CockroachDB SQL shell.
# All statements must be terminated by a semicolon.
# To exit, type: \q.
#
# Server version: CockroachDB CCL v20.1.1 (x86_64-unknown-linux-gnu, built 2020/05/19 14:46:06, go1.13.9) (same version as client)
# Cluster ID: 77cf3b29-f895-45ab-9592-7956a3effdb7
#
# Enter \? for a brief introduction.
#
root@192.168.10.131:26257/defaultdb> CREATE DATABASE bank;
The CREATE DATABASE bank statement took more than a minute and seemed not to work. But when I tried again later, the same statement finished within a second.
The status of the cluster is as follows:
id | address | sql_address | build | started_at | updated_at | locality | is_available | is_live
-----+----------------------+----------------------+---------+----------------------------------+----------------------------------+----------+--------------+----------
1 | 192.168.10.131:26257 | 192.168.10.131:26257 | v20.1.1 | 2020-05-28 05:00:17.725807+00:00 | 2020-05-28 05:26:01.338089+00:00 | | true | true
2 | 192.168.10.132:26257 | 192.168.10.132:26257 | v20.1.1 | 2020-05-28 05:00:18.574806+00:00 | 2020-05-28 05:26:02.121931+00:00 | | true | true
3 | 192.168.10.133:26257 | 192.168.10.133:26257 | v20.1.1 | 2020-05-28 05:00:18.729008+00:00 | 2020-05-28 05:26:02.253278+00:00 | | true | true
(3 rows)
Do you have any ideas to solve this problem?

This doesn't look expected; perhaps it's related to the network topology of your cluster or the cluster's resources at the time of the statement.
If this is still an issue, there are two things we could try:
a) Upgrade to the latest stable version of v20.2, which introduces many stability and performance improvements.
b) Collect a debug.zip and submit a ticket at www.support.cockroachlabs.com so the technical support team can take a look.
I recommend upgrading to the latest stable version of v20.1 or v20.2 and giving that a shot though!
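As a quick sanity check before collecting a debug.zip, you could also verify that every node actually reports itself ready over the admin HTTP endpoint (/health?ready=1, which returns 200 only when the node is live and accepting SQL connections). A minimal sketch, assuming the node addresses from the question and the default admin port 8080; the check parameter is injectable purely for testing:

```ruby
# Sketch: poll each node's readiness endpoint and return the ready subset.
# Node addresses are the ones from the question; port 8080 is the default
# admin UI port (an assumption if your deployment overrides --http-addr).
require 'net/http'

NODES = %w[192.168.10.131 192.168.10.132 192.168.10.133]

# `check` is injectable for testing; by default it performs a real HTTP
# request against http://<host>:8080/health?ready=1.
def ready_nodes(hosts, check: nil)
  check ||= lambda do |host|
    begin
      Net::HTTP.get_response(host, '/health?ready=1', 8080).is_a?(Net::HTTPSuccess)
    rescue StandardError
      false # unreachable node counts as not ready
    end
  end
  hosts.select { |h| check.call(h) }
end
```

If `ready_nodes(NODES)` keeps coming back short of three nodes while `is_live` shows true, that discrepancy itself is useful information for the support ticket.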


Presto API to get active workers

I would like to use the Presto API to get the number of active workers, similar to the info available in the Presto UI.
I tried APIs similar to the following, but they don't contain this info:
https://presto/v1/status
https://presto/v1/jmx
AFAIK, in the latest Trino (formerly Presto SQL) versions the workers cannot be introspected from outside the cluster, but you can get the listing with SQL:
presto> SELECT * FROM system.runtime.nodes;
node_id | http_uri | node_version | coordinator | state
---------------+------------------------+------------------+-------------+--------
presto-worker | http://172.20.0.3:8081 | 347-137-g4945abe | false | active
presto-master | http://172.20.0.4:8080 | 347-137-g4945abe | true | active
(2 rows)
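If you need the count programmatically, you can run the query above through any client and reduce the rows yourself. A minimal sketch of that reduction, using hashes that mirror the columns of system.runtime.nodes (the sample rows are the ones from the output above):

```ruby
# Sketch: count active workers from system.runtime.nodes rows.
# Workers are the non-coordinator nodes in the 'active' state.
def count_active_workers(rows)
  rows.count { |r| !r[:coordinator] && r[:state] == 'active' }
end

# Sample rows mirroring the query output above.
rows = [
  { node_id: 'presto-worker', coordinator: false, state: 'active' },
  { node_id: 'presto-master', coordinator: true,  state: 'active' }
]
puts count_active_workers(rows) # -> 1
```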

Using in-band commissioning with SAMR21 FTD CLI returns Join failed [InvalidArgs]

I'm trying the guide at https://codelabs.developers.google.com/codelabs/openthread-hardware/#8 with two SAMR21s running the SAMR21 FTD CLI examples. Everything seems to work until I run joiner start J01NME, which returns Join failed [InvalidArgs].
I did a fresh build and upload of the SAMR21 FTD CLI examples with the parameters COMMISSIONER=1 JOINER=1 DHCP6_CLIENT=1 DHCP6_SERVER=1 on both SAMR21s.
I typed the following commands into the two SAMR21s' CLIs:
SAMR21 1:
> dataset init new
> dataset commit active
> ifconfig up
> thread start
> state
leader
> commissioner start
Commissioner: petitioning
done
> commissioner joiner add 0004251918018576 J01NME
done
SAMR21 2:
> eui64
0004251918018576
> ifconfig up
> scan
| J | Network Name | Extended PAN | PAN | MAC Address | Ch | dBm | LQI |
| 1 | OpenThread-f171 | 1b6239e953fd2be4 | f171 | 76144ebb984c039a | 0 | -7 | 0 |
> joiner start J01NME
done
> Join failed [InvalidArgs]
Observations
I have noticed that when the commissioner times out, the following is displayed, which doesn't seem right, as the EUI-64 being removed doesn't match what I added:
> commissioner joiner add 0004251918018576 J01NME
Done
> Commissioner: Joiner remove b34a468958787c5e
Any help would be appreciated, thanks.

Using table inside of dynamic step definitions

I have a feature file with a step like this:
When user login with credentials
|username|password|
|blahblah|blahblah|
But then I want to use this step inside my dynamic step definitions. Is that possible? I was trying something like this:
step 'user login with credentials
|username|password|
|blahblah|blahblah|'
or creating and passing a hash, but I couldn't make it work so far.
https://github.com/cucumber/cucumber/wiki/Calling-Steps-from-Step-Definitions#calling-steps-with-multiline-step-arguments
Calling steps with multiline step arguments
Sometimes you want to call a step that has been designed to take Multiline Step Arguments, for example:
# ruby
Given /^an expense report for (.*) with the following posts:$/ do |date, posts_table|
  # The posts_table variable is an instance of Cucumber::Ast::Table
end
This can easily be called from a plain text step like this:
# feature
Given an expense report for Jan 2009 with the following posts:
| account | description | amount |
| INT-100 | Taxi | 114 |
| CUC-101 | Peeler | 22 |
But what if you want to call this from a step definition? There are a couple of ways to do this:
# ruby
Given /A simple expense report/ do
  step "an expense report for Jan 2009 with the following posts:", table(%{
    | account | description | amount |
    | INT-100 | Taxi        | 114    |
    | CUC-101 | Peeler      | 22     |
  })
end
Or, if you prefer a more programmatic approach:
# ruby
Given /A simple expense report/ do
  step "an expense report for Jan 2009 with the following posts:", table([
    %w{ account description amount },
    %w{ INT-100 Taxi 114 },
    %w{ CUC-101 Peeler 22 }
  ])
end
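Applied to the login step from the question, the hash idea can work: convert the hash into the array-of-arrays form that table(...) accepts. A minimal sketch (the helper name credentials_rows is made up for illustration, and the commented step call assumes the same step/table helpers shown above):

```ruby
# Sketch: build the two-row table for the login step from a plain hash.
# credentials_rows (a made-up helper name) converts a hash into the
# array-of-arrays form that Cucumber's table(...) helper accepts.
def credentials_rows(creds)
  [creds.keys.map(&:to_s), creds.values.map(&:to_s)]
end

# Inside a step definition this would then be (untested against a real
# Cucumber run):
#   step 'user login with credentials', table(credentials_rows(
#     username: 'blahblah', password: 'blahblah'))
p credentials_rows(username: 'blahblah', password: 'blahblah')
# -> [["username", "password"], ["blahblah", "blahblah"]]
```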

Dynamic creation of Table type

I have a column table with a single column.
I would like to create a table type whose column names are all the elements in the column of the above-mentioned table, with a fixed data type and size, and use it in a function.
Similar to this:
Dynamic creation of table in tsql
Any suggestions would be appreciated.
EDIT:
To finish a product, a machine has to perform different Jobs on the material with different tools.
I have a list of Jobs a machine can perform and a list of Tools: a specific tool for a specific Job.
Each Job needs a specific tool and a number of hours (after which the tool must be changed, once it reaches its change time). A Job can be performed many times on a product (in this case, if a Job is performed for 1 hour, the tool has been used for 1 hour).
For each product, a set of tools will be at work in a sequence, so I need a report showing, for each product, the number of hours each tool has worked.
EDIT 2:
Product table
ProductID | Jobs
----------+-------
        1 | job1
        1 | job2
        1 | job3
        1 | .
        1 | .
        1 | 100th
        2 | job1
        2 | .
        2 | .
        2 | 200th
Jobs table
-------+-------+-------
Jobs | tool | time
-------+-------+-------
job1 |tool 10| 2
job1 |tool 09| 1
job2 |tool 11| 4
job3 |tool 17| 0.5
required report (this table does not physically exist)
----------+------+------+------+------+------+-----
productID | job1 | job2 | job3 | job4 | job5 | . . .
----------+------+------+------+------+------+------
1 | 20 | 10 | 5 | . | . | .
----------+------+------+------+------+------+------
2 | 10 | 13 | 5 | . | . | .
----------+------+------+------+------+------+------
Based on the added information, there are two main requirements here:
You want to sum up the time spent for producing each product grouped by the jobs involved
and
You want to have a cross-table report showing the times from step 1 against products and jobs.
For the first bit, you probably could do this with a query like this:
SELECT
    p.product_id,
    j.jobs,
    SUM(j.time) AS sum_time
FROM
    products p
    INNER JOIN jobs j
        ON p.jobs = j.jobs
GROUP BY
    p.product_id,
    j.jobs;
For the second part: this is usually called a PIVOT report.
SAP HANA does not provide a dynamic SQL command for generating output in this form (other DBMS have that).
However, this dynamic transformation is usually relevant for the data presentation and not so much for the processing.
So, as you probably want to use some form of front end for this report (e.g. MS Excel, Crystal Reports, Business Objects X, Tableau, ...) I would recommend doing the transformation and formatting in the frontend report. Look for "PIVOT" or "CROSSTAB" options to do that.
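If you do end up needing the cross table in application code rather than a reporting front end, the pivot over the grouped rows is straightforward. A minimal sketch, assuming rows shaped like (product_id, jobs, sum_time) from the GROUP BY query above (the sample values are made up):

```ruby
# Sketch: application-side pivot of grouped (product_id, job, sum_time)
# rows into a products-by-jobs cross table.
def pivot(rows)
  rows.each_with_object(Hash.new { |h, k| h[k] = {} }) do |(product_id, job, sum_time), table|
    table[product_id][job] = sum_time
  end
end

# Sample values (made up) shaped like the grouped query's output.
rows = [[1, 'job1', 20], [1, 'job2', 10], [2, 'job1', 10]]
p pivot(rows) # -> {1=>{"job1"=>20, "job2"=>10}, 2=>{"job1"=>10}}
```

The job columns then come from the union of the inner hash keys, which is exactly the dynamic part that plain SQL PIVOT syntax would otherwise handle.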

Creating an SSIS job to split a column and insert into database

I have a column called Description:
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Description/Title |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Liszt, Hungarian Rhapsody #6 {'Pesther Carneval'}; 2 Episodes from Lenau's 'Faust'; 'Hunnenschlacht' Symphonic Poem. (NW German Phil./ Kulka) |
| Beethoven, Piano Sonatas 8, 23 & 26. (Justus Frantz) |
| Puccini, Verdi, Gounod, Bizet: Arias & Duets from Butterfly, Tosca, Boheme, Turandot, I Vespri, Faust, Carmen. (Fiamma Izzo d'Amico & Peter Dvorsky w.Berlin Radio Symph./Paternostro) |
| Puccini, Ponchielli, Bizet, Tchaikovsky, Donizetti, Verdi: Arias from Boheme, Manon Lescaut, Tosca, Gioconda, Carmen, Eugen Onegin, Favorita, Rigoletto, Luisa Miller, Ballo, Aida. (Peter Dvorsky, ten. w.Hungarian State Opera Orch./ Mihaly) |
| Thomas, Leslie: 'The Virgin Soldiers' (Hywel Bennett reads abridged version. Listening time app. 2 hrs. 45 mins. DOLBY) |
| Katalsky, A. {1856-1926}: Liturgy for A Cappella Chorus. Rachmaninov, 6 Choral Songs w.Piano. (Bolshoi Theater Children's Choir/ Zabornok. DOLBY) |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Please note that above I'm only showing 1 field.
Also, the output that I would like is:
+-------+-------+
| Word | Count |
+-------+-------+
| Arias | 3 |
| Duets | 2 |
| Liszt | 10 |
| Tosca | 1 |
+-------+-------+
I want this output to encompass EVERY record. I do not want a separate one of these for each record, just one global one.
I am choosing to use SSIS to do this job. I'd like your input on which controls to use to help with this task:
I'm not looking for a solution, but simply some direction on how to get started with this. I understand this can be done many different ways, but I cannot seem to think of a way to do this most efficiently. Thank you for any guidance.
FYI:
This script does an excellent job of concatenating everything:
SELECT description + ', ' AS 'data()'
FROM [BroincInventory]
FOR XML PATH('')
But I need guidance on how to work with this result to create the required output. How can this be done with c# or with one of the SSIS components?
Edit: As siyual points out below, I need a Script Task. The script above obviously will not work, since there's a limit to the size of a data point.
I think term extraction might be the component you are looking for. Check this out: http://www.mssqltips.com/sqlservertip/3194/simple-text-mining-with-the-ssis-term-extraction-component/
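Whichever component you pick, the underlying logic is a single global word count over every Description value. A minimal sketch of that logic (the tokenization rule here is an assumption; the Term Extraction component and a C# Script Component each apply their own rules):

```ruby
# Sketch: one global word count across every Description value, matching
# the Word/Count output shape asked for in the question. In SSIS this
# logic would live in a Script Component (or be replaced by the Term
# Extraction component).
def word_counts(descriptions)
  counts = Hash.new(0)
  descriptions.each do |text|
    # split on anything that is not a letter, digit, or apostrophe
    text.split(/[^A-Za-z0-9']+/).reject(&:empty?).each { |w| counts[w] += 1 }
  end
  counts
end

rows = [ # sample rows standing in for [BroincInventory].Description
  "Beethoven, Piano Sonatas 8, 23 & 26. (Justus Frantz)",
  'Liszt, Hungarian Rhapsody #6'
]
p word_counts(rows).max_by(3) { |_, n| n }
```

Streaming row by row like this also sidesteps the size limit hit by the FOR XML concatenation approach, since no single concatenated string is ever built.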