ClickHouse unable to attach part because of file permissions - backup

I'm having a problem backing up partitions and loading them into another instance.
I've done the following:
ALTER TABLE Test FREEZE PARTITION 201912
I took the generated partition from shadow, moved it to the other instance under its detached folder,
and then ran
ALTER TABLE Test ATTACH PARTITION 201912
and it failed with "Access to file denied /....attaching_2019../"
which is odd because I set the entire lib directory to 777 permissions and it still happens. Any ideas?
Attaching the error:
2019.12.16 08:28:55.541817 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> executeQuery: (from [::ffff:127.0.0.1]:47426) ALTER TABLE test ATTACH PARTITION 201912
2019.12.16 08:28:55.541967 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> default.test: Looking for parts for partition 201912 in detached/
2019.12.16 08:28:55.542005 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> default.test: Found part 201912_0_1_1
2019.12.16 08:28:55.542020 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> default.test: 1 of them are active
2019.12.16 08:28:55.542053 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> default.test: Checking parts
2019.12.16 08:28:55.542061 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> default.test: Checking part attaching_201912_0_1_1
2019.12.16 08:28:55.543951 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Trace> default.test: Renaming temporary part detached/attaching_201912_0_1_1 to 201912_6_6_0.
2019.12.16 08:28:55.544832 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2019.12.16 08:28:55.544864 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Error> executeQuery: Poco::Exception. Code: 1000, e.code() = 1, e.displayText() = Access to file denied: insufficient permissions: /data/clickhouse/data/data/default/test/detached/attaching_201912_0_1_1 (version 19.16.3.6 (official build) (from [::ffff:127.0.0.1]:47426) (in query: ALTER TABLE test ATTACH PARTITION 201912 )
2019.12.16 08:28:55.544954 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2019.12.16 08:28:55.544972 [ 65 ] {b62d9630-7ab2-48a4-89a5-47d296151802} <Information> TCPHandler: Processed in 0.003 sec

chown -R clickhouse /var/lib/clickhouse
This fixed the issue for me.
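For reference, the whole move can be scripted. Below is a sketch in Python over an illustrative temporary tree standing in for the real ClickHouse data directory (all paths here are made up for the demo); the ownership step in the comments is the actual fix from this answer:

```python
import shutil, tempfile
from pathlib import Path

# Illustrative layout only: a temp tree standing in for the source's
# shadow/ directory and the destination table's detached/ directory.
root = Path(tempfile.mkdtemp())
frozen_part = root / "shadow/1/data/default/Test/201912_0_1_1"
detached = root / "data/default/Test/detached"
frozen_part.mkdir(parents=True)
detached.mkdir(parents=True)
(frozen_part / "checksums.txt").write_text("demo")  # stand-in for real part files

# 1. FREEZE on the source: ALTER TABLE Test FREEZE PARTITION 201912
#    (creates the part copy under shadow/<increment>/...)
# 2. Copy the frozen part into the destination's detached/ directory:
shutil.copytree(frozen_part, detached / frozen_part.name)

# 3. Ownership is the key step: the clickhouse user must own the files,
#    or ATTACH fails with "Access to file denied" even under mode 777:
#        chown -R clickhouse /var/lib/clickhouse
# 4. Finally, on the destination: ALTER TABLE Test ATTACH PARTITION 201912
print((detached / "201912_0_1_1" / "checksums.txt").exists())  # True
```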


Azure Data Factory: ErrorCode=TypeConversionFailure,Exception occurred when converting value : ErrorCode: 2200

Can someone let me know why Azure Data Factory is trying to convert a value from String to type Double?
I am getting the error:
{
    "errorCode": "2200",
    "message": "ErrorCode=TypeConversionFailure,Exception occurred when converting value '+44 07878 44444' for column name 'telephone2' from type 'String' (precision:255, scale:255) to type 'Double' (precision:15, scale:255). Additional info: Input string was not in a correct format.",
    "failureType": "UserError",
    "target": "Copy Table to EnrDB",
    "details": [
        {
            "errorCode": 0,
            "message": "'Type=System.FormatException,Message=Input string was not in a correct format.,Source=mscorlib,'",
            "details": []
        }
    ]
}
My Sink looks like the following:
I don't have any mapping set.
The column setting for the field 'telephone2' is as follows:
I changed the 'table option' to none; however, I got the following error:
{
    "errorCode": "2200",
    "message": "Failure happened on 'Source' side. ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed with the following error: 'Internal system error occurred.\r\nStatement ID: {C2C38377-5A14-4BB7-9298-28C3C351A40E} | Query hash: 0x2C885D2041993FFA | Distributed request ID: {6556701C-BA76-4D0F-8976-52695BBFE6A7}. Total size of data scanned is 134 megabytes, total size of data moved is 102 megabytes, total size of data written is 0 megabytes.',Source=,''Type=System.Data.SqlClient.SqlException,Message=Internal system error occurred.\r\nStatement ID: {C2C38377-5A14-4BB7-9298-28C3C351A40E} | Query hash: 0x2C885D2041993FFA | Distributed request ID: {6556701C-BA76-4D0F-8976-52695BBFE6A7}. Total size of data scanned is 134 megabytes, total size of data moved is 102 megabytes, total size of data written is 0 megabytes.,Source=.Net SqlClient Data Provider,SqlErrorNumber=75000,Class=17,ErrorCode=-2146232060,State=1,Errors=[{Class=17,Number=75000,State=1,Message=Internal system error occurred.,},{Class=0,Number=15885,State=1,Message=Statement ID: {C2C38377-5A14-4BB7-9298-28C3C351A40E} | Query hash: 0x2C885D2041993FFA | Distributed request ID: {6556701C-BA76-4D0F-8976-52695BBFE6A7}. Total size of data scanned is 134 megabytes, total size of data moved is 102 megabytes, total size of data written is 0 megabytes.,},],'",
    "failureType": "UserError",
    "target": "Copy Table to EnrDB",
    "details": []
}
Any more thoughts?
The issue was resolved by changing the column DataType on the database to match the DataType recorded in Azure Data Factory, i.e. StringType.

Kusto | calculate percentage grouped by 2 columns

I have a result set that looks something like the table below, and I extended it with a Percentage column like so:
datatable (Code:string, App:string, Requests:long)
[
"200", "tra", 63,
"200", "api", 1036,
"302", "web", 12,
"200", "web", 219,
"500", "web", 2,
"404", "api", 18
]
| as T
| extend Percentage = round(100.0 * Requests / toscalar(T | summarize sum(Requests)), 2)
The problem is I really want the percentage to be calculated from the total of Requests of the Code by App rather than the grand total.
For example, for the App "api" where Code is "200", instead of 76.74% of the grand total, I want to express it as a percentage of just the rows where App is "api", which would be 98.29% of the total Requests for App "api".
I haven't really tried anything that would be considered valid syntax. Any help much appreciated.
You can use the join or lookup operators:
datatable (Code:string, App:string, Requests:long)
[
"200", "tra", 63,
"200", "api", 1036,
"302", "web", 12,
"200", "web", 219,
"500", "web", 2,
"404", "api", 18
]
| as T
| lookup ( T | summarize sum(Requests) by App ) on App
| extend Percentage = round(100.0 * Requests / sum_Requests, 2)
| project Code, App, Requests, Percentage
Code | App | Requests | Percentage
-----|-----|----------|-----------
200  | api | 1036     | 98.29
404  | api | 18       | 1.71
200  | tra | 63       | 100
302  | web | 12       | 5.15
200  | web | 219      | 93.99
500  | web | 2        | 0.86
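The arithmetic behind those per-App percentages can be double-checked outside Kusto. Here is a quick Python sketch of the same group-and-divide logic the lookup performs:

```python
from collections import defaultdict

# The sample rows from the datatable above: (Code, App, Requests).
rows = [("200", "tra", 63), ("200", "api", 1036), ("302", "web", 12),
        ("200", "web", 219), ("500", "web", 2), ("404", "api", 18)]

# Equivalent of: T | summarize sum(Requests) by App
totals = defaultdict(int)
for _, app, requests in rows:
    totals[app] += requests

# Equivalent of: extend Percentage = round(100.0 * Requests / sum_Requests, 2)
pct = {(code, app): round(100.0 * requests / totals[app], 2)
       for code, app, requests in rows}

print(pct[("200", "api")])  # 98.29
print(pct[("500", "web")])  # 0.86
```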

LibreOffice: Identifying 'Named Destinations'

I am working on an application that can open and display a PDF page using Poppler. I understand that named destinations are the right way to open particular pages and, more specifically, to show an area within a page.
I figured out that it is possible to export headings and bookmarks in the PDF file by enabling the 'Export outlines as named destinations' option. However, the names of these destinations look like the following:
13 [ XYZ 96 726 null ] "5F5FRefHeading5F5F5FToc178915F2378596536"
14 [ XYZ 92 688 null ] "5F5FRefHeading5F5F5FToc179995F2378596536"
14 [ XYZ 92 655 null ] "5F5FRefHeading5F5F5FToc180015F2378596536"
14 [ XYZ 92 622 null ] "5F5FRefHeading5F5F5FToc187075F2378596536"
14 [ XYZ 92 721 null ] "5F5FRefHeading5F5F5FToc187095F2378596536"
There is no way to identify which heading is mapped to which destination. Page numbers are there, but if there are multiple headings on the same page it would again take trial and error to identify the right one.
Questions
Is there any way in LibreOffice (Writer) to find out what WILL BE the name of the destination once exported? Adobe Acrobat and PDF Studio Viewer have options to navigate through the list of destinations and 'see where they go'. To the best of my knowledge, the navigation pane in LibreOffice does not show destination names.
Is there a guarantee that the names remain unique irrespective of any sections (headings) or pages that may get inserted before them?
I understand that LibreOffice uses 5F in place of _ because underscores are not allowed in PDF bookmark names. So if I replace those, I am left with:
13 [ XYZ 96 726 null ] "__RefHeading___Toc17891_2378596536"
14 [ XYZ 92 688 null ] "__RefHeading___Toc17999_2378596536"
14 [ XYZ 92 655 null ] "__RefHeading___Toc18001_2378596536"
14 [ XYZ 92 622 null ] "__RefHeading___Toc18707_2378596536"
14 [ XYZ 92 721 null ] "__RefHeading___Toc18709_2378596536"
15 [ XYZ 96 726 null ] "__RefHeading___Toc18492_2378596536"
Decoding further, the prefix (RefHeading) tells us that the destination comes from a heading, and the suffix (2378596536) is probably a unique number identifying the entire document (since it is the same for all entries). The middle portion appears to be a unique key; however, I am unable to identify the heading (or its section number) from this part.
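For what it's worth, the decoding step itself is a plain substitution. A small Python sketch, assuming the exported names only ever escape the underscore as 5F (which matches all the examples above):

```python
# Raw destination names as exported by LibreOffice (from the listing above).
names = [
    "5F5FRefHeading5F5F5FToc178915F2378596536",
    "5F5FRefHeading5F5F5FToc179995F2378596536",
]

# Undo the escaping: LibreOffice writes "5F" (hex for "_") in place of "_".
decoded = [n.replace("5F", "_") for n in names]

print(decoded[0])  # __RefHeading___Toc17891_2378596536
```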

How do I create a randomly distributed boolean variable for a <breed> that will change in the model?

I am writing a model with two breeds:
sexworkers and officers
where sexworkers possess a boolean variable that is randomly distributed at setup, but then changes during go according to the behavior of, and interaction with, officers.
I use sexworkers-own [ trust? ]
in the preamble, but I am not sure how to distribute yes/no values of the variable randomly across the sexworkers population. I really appreciate any input!
Thank you so much!
If I understand your question correctly, you just want sexworkers to randomly choose between true and false for the trust? variable on setup. If that's right, then maybe one-of will do the trick for you. For an example, run this simple setup:
breed [ sexworkers sexworker ]
sexworkers-own [ trust? ]

to setup
  ca
  create-sexworkers 1000 [
    set trust? one-of [ true false ]
  ]
  print word "% Trusting: " ( ( count sexworkers with [ trust? ] ) / count sexworkers * 100 )
  reset-ticks
end
If you're looking for some kind of uneven distribution, you can do simple ones using the random or random-float primitives. For example, if I want 25% of the sexworkers to start with trust? = true, I can do something like:
to setup-2
  ca
  create-sexworkers 1000 [
    ifelse random-float 1 < 0.25 [
      set trust? true
    ] [
      set trust? false
    ]
  ]
  print word "% Trusting: " ( ( count sexworkers with [ trust? ] ) / count sexworkers * 100 )
  reset-ticks
end
For specific distributions, have a look at the various random reporters.
For weighted randomness, have a look at the rnd extension.

Format of 64 bit symbol table entry in ELF

I am trying to learn about the ELF format by taking a close look at how everything relates, and I can't understand why the symbol table entries are the size they are.
When I run readelf -W -S tiny.o I get:
Section Headers:
[Nr] Name Type Address Off Size ES Flg Lk Inf Al
[ 0] NULL 0000000000000000 000000 000000 00 0 0 0
[ 1] .bss NOBITS 0000000000000000 000200 000001 00 WA 0 0 4
[ 2] .text PROGBITS 0000000000000000 000200 00002a 00 AX 0 0 16
[ 3] .shstrtab STRTAB 0000000000000000 000230 000031 00 0 0 1
[ 4] .symtab SYMTAB 0000000000000000 000270 000090 18 5 5 4
[ 5] .strtab STRTAB 0000000000000000 000300 000015 00 0 0 1
[ 6] .rela.text RELA 0000000000000000 000320 000030 18 4 2 4
This shows the symbol table having 0x18 (24) bytes per entry and a total size of 0x90 (0x300 - 0x270), giving us 6 entries.
This matches what readelf -W -s tiny.o says:
Symbol table '.symtab' contains 6 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FILE LOCAL DEFAULT ABS tiny.asm
2: 0000000000000000 0 SECTION LOCAL DEFAULT 1
3: 0000000000000000 0 SECTION LOCAL DEFAULT 2
4: 0000000000000000 0 NOTYPE LOCAL DEFAULT 1 str
5: 0000000000000000 0 NOTYPE GLOBAL DEFAULT 2 _start
So clearly the 24-byte size is correct, but that would correspond to a 32-bit table entry as described in this 32-bit spec.
Given that I am on a 64-bit system and the ELF file is 64-bit, I would expect the entry to be as described in this 64-bit spec.
Upon looking at a hex dump of the file, I found that the layout of the fields in the file seems to follow this 64-bit pattern.
So then why is the ELF file seemingly using undersized symbol table entries despite using the 64-bit layout and being a 64-bit file?
So then why is the ELF file seemingly using undersized symbol table entries
What makes you believe they are undersized?
In Elf64_Sym, we have:
int   st_name
char  st_info
char  st_other
short st_shndx   <--- 8 bytes
long  st_value   <--- 8 bytes
long  st_size    <--- 8 bytes
That's 24 bytes total, exactly as you'd expect.
To convince yourself that everything is in order, compile this program:
#include <elf.h>
#include <stdio.h>

int main()
{
    Elf64_Sym s64;
    Elf32_Sym s32;
    printf("%zu %zu\n", sizeof(s32), sizeof(s64));
    return 0;
}
Running it produces 16 24. You can also run it under GDB and look at offsets of the various fields, e.g.
(gdb) p (char*)&s64.st_value - (char*)&s64
$1 = 8
(gdb) p (char*)&s64.st_size - (char*)&s64
$2 = 16
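If you'd rather not compile anything, the same size check can be done by summing the field widths. Here is a sketch using Python's struct module, where the format strings are my own encoding of the field orders shown above (Elf64_Sym: uint32 st_name, two bytes, uint16 st_shndx, then two uint64s; Elf32_Sym: three uint32s, two bytes, uint16):

```python
import struct

# "<" disables padding, but none is needed here: each field already sits
# at an offset that is a multiple of its own size.
elf64_sym_size = struct.calcsize("<IBBHQQ")   # st_name, st_info, st_other,
                                              # st_shndx, st_value, st_size
elf32_sym_size = struct.calcsize("<IIIBBH")   # st_name, st_value, st_size,
                                              # st_info, st_other, st_shndx

print(elf64_sym_size, elf32_sym_size)  # 24 16
```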