How to connect Hive with Cloudera's Hue?

I installed Cloudera's Hue and started the server,
but I get an error on the web page (127.0.0.1:8000/hue/editor/?type=hive) like the one below:
TSocket read 0 bytes (code THRIFTTRANSPORT): TTransportException('TSocket read 0 bytes',)
I checked the log and can see an entry like the one below:
QueryError: TSocket read 0 bytes (code THRIFTTRANSPORT): TTransportException('TSocket read 0 bytes',)
I use HiveServer2 and the transport mode is binary.
How can I solve this problem?
EDIT: I use NOSASL in hive-site.xml.
EDIT: My beeswax server settings are as follows:
{
"dialect": "beeswax",
"has_session_pool": False,
"server_name": "beeswax",
"transport_mode": "socket",
"auth_username": "hue",
"server_host": "localhost",
"server_port": 10000,
"use_sasl": True,
"auth_password_used": False,
"http_url": "http://localhost:10001/cliservice",
"max_number_of_sessions": 1,
"close_sessions": False,
"principal": None
}

I figured this out myself.
After a lot of searching I found the directory and file where Hue can be configured. I had come across many articles telling me to modify Hue's settings file, but none of them said which directory that file lives in. Once I located the settings file through another article, I uncommented "use_sasl" in it and changed NOSASL to NONE in hive-site.xml. This worked well! 😁
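For reference, here is a minimal sketch of the two settings involved. The paths are typical defaults and may differ on your installation, so treat this as an example rather than the exact Cloudera layout:

# hue.ini (e.g. desktop/conf/hue.ini for a source build, or /etc/hue/conf/hue.ini for a package install)
[beeswax]
  hive_server_host=localhost
  hive_server_port=10000
  # uncomment use_sasl and set it explicitly instead of relying on the default
  use_sasl=true

<!-- hive-site.xml on the HiveServer2 host -->
<property>
  <name>hive.server2.authentication</name>
  <!-- was NOSASL; NONE uses the plain SASL transport, which matches use_sasl=true in Hue -->
  <value>NONE</value>
</property>

After changing both files, restart HiveServer2 and the Hue server so the new settings take effect.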

KIBANA - WAZUH pattern index

I have a project to install Wazuh as a FIM on Linux, AIX and Windows.
I managed to install the Manager and all agents on all systems, and I can see all three connected as agents on the Kibana web UI.
I created a test file on the Linux agent and I can also find it in the web interface, so the servers are connected.
The test file shows up in the Wazuh inventory tab.
However, I am not receiving any logs when I modify this test file.
These are my settings in ossec.conf under syscheck on the agent server:
<directories>/var/ossec/etc/test</directories>
<directories report_changes="yes" check_all="yes" realtime="yes">/var/ossec/etc/test</directories>
And now I am also struggling to understand the meaning of index patterns, index templates and fields.
I don't understand what they are and why we need to set them.
My settings on manager server - /usr/share/kibana/data/wazuh/config/wazuh.yml
alerts.sample.prefix: 'wazuh-alerts-*'
pattern: 'wazuh-alerts-*'
On the Kibana web UI I also get this error when I try to check "Events" - there are no logs under Events.
Error: The field "timestamp" associated with this object no longer exists in the index pattern. Please use another field.
at FieldParamType.config.write.write (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:627309)
at http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:455052
at Array.forEach (<anonymous>)
at writeParams (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:455018)
at AggConfig.write (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:355081)
at AggConfig.toDsl (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:355960)
at http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:190748
at Array.forEach (<anonymous>)
at agg_configs_AggConfigs.toDsl (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:189329)
at http://MYIP:5601/42959/bundles/plugin/wazuh/4.2.5-4206-1/wazuh.chunk.6.js:55:1397640
Thank you.
About FIM:
Here you can find the FIM documentation in case you don't have it:
https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/fim-configuration.html
https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/syscheck.html
The first requirement for this to work is to ensure that a FIM alert is triggered. Could you check the alerts.json file on your manager? It is usually located at /var/ossec/logs/alerts/alerts.json. To test this fully I would run "tail -f /var/ossec/logs/alerts/alerts.json" and make a change in your monitored directory; if no alert is generated, we will need to check the agent configuration.
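As a concrete sketch of that test (paths taken from your configuration; the file name and the systemd service name are assumptions, adjust them to your setup):

# on the manager: watch alerts as they are written
tail -f /var/ossec/logs/alerts/alerts.json

# on the agent: modify a file inside the monitored directory from your ossec.conf
echo "change $(date)" >> /var/ossec/etc/test/wazuhtest

# if nothing appears, restart the agent so it rereads ossec.conf, then try again
systemctl restart wazuh-agent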
About indexing:
Here you can find some documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html
https://www.elastic.co/guide/en/kibana/current/managing-index-patterns.html#scripted-fields
https://documentation.wazuh.com/current/user-manual/kibana-app/reference/elasticsearch.html
Regarding your error, the best way to solve this is to delete the index pattern. To do this:
go to Kibana -> Stack Management -> Index Patterns and delete wazuh-alerts-*.
Then, if you open the Wazuh app, the health check will create it again, or you can create the index pattern yourself:
go to Kibana -> Stack Management -> Index Patterns and select Create index pattern.
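Before recreating the pattern, it can also help to confirm that matching alert indices actually exist in Elasticsearch. A quick check could look like this (adjust the host and credentials to your setup):

curl -s "http://localhost:9200/_cat/indices/wazuh-alerts-*?v"
# if this returns no indices, the problem is on the Filebeat/indexing side
# rather than in the Kibana index pattern itself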
Hope this information helps you.
Regards.
Thank you for your answer.
I managed to get past that issue, but I hit another error.
When I run tail -f /var/ossec/logs/alerts/alerts.json I get a never-ending stream of updates, thousands of lines with errors like:
{"timestamp":"2022-01-31T12:40:08.458+0100","rule":{"level":5,"description":"Systemd: Service has entered a failed state, and likely has not started.","id":"40703","firedtimes":7420,"mail":false,"groups":["local","systemd"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"X.X.X.X"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629208.66501653","full_log":"Jan 31 12:40:07 MYAGENTSERVERNAME systemd: Unit rbro-cbs-adapter-int.service entered failed state.","predecoder":{"program_name":"systemd","timestamp":"Jan 31 12:40:07","hostname":"MYAGENTSERVERNAME"},"decoder":{"name":"systemd"},"location":"/var/log/messages"}
But I can also find an alert when I change the monitored file (file: wazuhtest):
{"timestamp":"2022-01-31T12:45:59.874+0100","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","mitre":{"id":["T1492"],"tactic":["Impact"],"technique":["Stored Data Manipulation"]},"firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"x.x.xx.x"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629559.67086751","full_log":"File '/var/ossec/etc/wazuhtest' modified\nMode: realtime\nChanged attributes: size,mtime,inode,md5,sha1,sha256\nSize changed from '61' to '66'\nOld modification time was: '1643618571', now it is '1643629559'\nOld inode was: '786558', now it is '786559'\nOld md5sum was: '2dd5fe4d08e7c58dfdba76e55430ba57'\nNew md5sum is : 'd8b218e9ea8e2da8e8ade8498d06cba8'\nOld sha1sum was: 'ca9bac5a2d8e6df4aa9772b8485945a9f004a2e3'\nNew sha1sum is : 'bd8b8b5c20abfe08841aa4f5aaa1e72f54a46d31'\nOld sha256sum was: '589e6f3d691a563e5111e0362de0ae454aea52b7f63014cafbe07825a1681320'\nNew sha256sum is : '7f26a582157830b1a725a059743e6d4d9253e5f98c52d33863bc7c00cca827c7'\n","syscheck":{"path":"/var/ossec/etc/wazuhtest","mode":"realtime","size_before":"61","size_after":"66","perm_after":"rw-r-----","uid_after":"0","gid_after":"0","md5_before":"2dd5fe4d08e7c58dfdba76e55430ba57","md5_after":"d8b218e9ea8e2da8e8ade8498d06cba8","sha1_before":"ca9bac5a2d8e6df4aa9772b8485945a9f004a2e3","sha1_after":"bd8b8b5c20abfe08841aa4f5aaa1e72f54a46d31","sha256_before":"589e6f3d691a563e5111e0362de0ae454aea52b7f63014cafbe07825a1681320","sha256_after":"7f26a582157830b1a725a059743e6d4d9253e5f98c52d33863bc7c00cca827c7","uname_after":"root","gname_after":"root","mtime_before":"2022-01-31T09:42:51","mtime_after":"2022-01-31T12:45:59","inode_before":786558,"inode_after":786559,"diff":"1c1\n< dadadadadad\n---\n> dfsdfdadadadadad\n","changed_attributes":["size","mtime","inode","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
{"timestamp":"2022-01-31T12:46:08.452+0100","rule":{"level":3,"description":"Log file rotated.","id":"591","firedtimes":5,"mail":false,"groups":["ossec"],"pci_dss":["10.5.2","10.5.5"],"gpg13":["10.1"],"gdpr":["II_5.1.f","IV_35.7.d"],"hipaa":["164.312.b"],"nist_800_53":["AU.9"],"tsc":["CC6.1","CC7.2","CC7.3","PI1.4","PI1.5","CC7.1","CC8.1"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"x.x.xx.x"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629568.67099280","full_log":"ossec: File rotated (inode changed): '/var/ossec/etc/wazuhtest'.","decoder":{"name":"ossec"},"location":"wazuh-logcollector"}
I can also see this error in the messages log on the manager server:
Jan 31 12:46:10 MYMANAGERSERVERNAME filebeat[186670]: 2022-01-31T12:46:10.379+0100#011WARN#011[elasticsearch]#011elasticsearch/client.go:405#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc07610e0563729bf, ext:10888984451164, loc:(*time.Location)(0x55958e3622a0)}, Meta:{"pipeline":"filebeat-7.14.0-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"dd9ff0c5-d5a9-4a0e-b1b3-0e9d7e8997ad","hostname":"MYMANAGERSERVERNAME","id":"03fb57ca-9940-4886-9e6e-a3b3e635cd35","name":"MYMANAGERSERVERNAME","type":"filebeat","version":"7.14.0"},"ecs":{"version":"1.10.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"MYMANAGERSERVERNAME"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":127261462},"message":"{"timestamp":"2022-01-31T12:46:08.452+0100","rule":{"level":3,"description":"Log file rotated.","id":"591","firedtimes":5,"mail":false,"groups":["ossec"],"pci_dss":["10.5.2","10.5.5"],"gpg13":["10.1"],"gdpr":["II_5.1.f","IV_35.7.d"],"hipaa":["164.312.b"],"nist_800_53":["AU.9"],"tsc":["CC6.1","CC7.2","CC7.3","PI1.4","PI1.5","CC7.1","CC8.1"]},"agent":{"id":"003","name":"xlcppt36","ip":"10.74.96.34"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629568.67099280","full_log":"ossec: File rotated (inode changed): '/var/ossec/etc/wazuhtest'.","decoder":{"name":"ossec"},"location":"wazuh-logcollector"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::706-64776", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc00095ea90), Source:"/var/ossec/logs/alerts/alerts.json", Offset:127262058, Timestamp:time.Time{wall:0xc076063e1f1b1286, ext:133605185, loc:(*time.Location)(0x55958e3622a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x2c2, Device:0xfd08}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"illegal_argument_exception","reason":"data_stream [<wazuh-alerts-4.x-{2022.01.31||/d{yyyy.MM.dd|UTC}}>] must not contain the following characters [ , ", *, \, <, |, ,, >, /, ?]"}
Here is the output from the application checks.
curl "http://localhost:9200"
{
"version" : {
"number" : "7.14.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "6bc13727ce758c0e943c3c21653b3da82f627f75",
"build_date" : "2021-09-15T10:18:09.722761972Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
filebeat test output
elasticsearch: http://127.0.0.1:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 127.0.0.1
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 7.14.2
So... I can see alerts coming from the agent, but they are not reaching Kibana yet. On the Kibana web UI I can see the agent as active and connected.
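Given the Filebeat error above (the generated data_stream/index name contains characters Elasticsearch rejects), a reasonable next step would be to inspect the configuration Filebeat is actually running with. This is only a diagnostic sketch, not a confirmed fix:

# dump the effective Filebeat configuration and look at how the output index is built
filebeat export config

# re-check connectivity after any change to /etc/filebeat/filebeat.yml
filebeat test output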

How to change the max size for file upload on AOLServer/CentOS 6?

We have a portal for our customers that allows them to start new projects directly on our platform. The problem is that we cannot upload documents bigger than 10 MB.
Every time I try to upload a file bigger than 10 MB, I get a "The connection was reset" error. After some research it seems that I need to change the maximum upload size, but I don't know where to do it.
I'm on CentOS 6.4/RedHat with AOLserver.
Language: TCL.
Anyone has an idea on how to do it?
EDIT
In the end I solved the problem with the command ns_limits set default -maxupload 500000000.
In your config.tcl, add the following line to the nssock module section:
set max_file_upload_mb 25
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
# ...
It is also advisable to constrain the upload time, by setting:
set max_file_upload_min 5
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param recvwait [expr {$max_file_upload_min * 60}]
If running on top of nsopenssl, you will have to set those configuration values (maxinput, recvwait) in a different section.
I see that you are running Project Open. As well as setting the maxinput value for AOLserver, as described by mrcalvin, you also need to set 2 parameters in the Site Map:
Attachments package: parameter "MaximumFileSize"
File Storage package: parameter "MaximumFileSize"
These should be set to values in bytes, but not larger than the maxinput value for AOLserver. See the Project Open documentation for more info.
If you are running Project Open behind a reverse proxy, check the corresponding documentation for Pound or Nginx. Most likely you will need to raise the file upload limit there too; see the Nginx sketch below.
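For Nginx, for example, the relevant directive is client_max_body_size. A minimal sketch, assuming the 25 MB limit used above (place it in the http, server or location block that fronts AOLserver, then reload Nginx):

# /etc/nginx/nginx.conf (or the relevant server/location block)
http {
    client_max_body_size 25m;
}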

Script test seems to not have correct working directory

I am trying to see if I can get the code examples to run under my own user account, rather than the testuser account. To that end, I did the following:
Created a folder example-fraud-score under my DeployR user account (not the testuser as laid out in the tutorial, found here: https://msdn.microsoft.com/en-us/microsoft-r/deployr-data-scientist-getting-started ).
Uploaded the contents of analytics/ from the tutorial to the example-fraud-score directory on the DeployR server.
Attempted to run the file ccFraudScore.R (contents here: https://github.com/Microsoft/js-example-fraud-score-basics/blob/master/analytics/ccFraudScore.R ) using the 'Test' tab on the right after clicking on the file name in DeployR.
When I do, I get the error:
Connecting to 172.31.232.190:8000
3:53:26 PM Stream Connect matthew.pettis connection established, waiting for an event...
> require(deployrUtils)
> deployrInput("{\"name\": \"bal\", \"render\":\"integer\", \"default\": 5000, \"min\" : 0, \"max\": 25000 }")
> deployrInput("{\"name\": \"trans\", \"render\":\"integer\", \"default\": 12, \"min\" : 0, \"max\": 100 }")
> deployrInput("{\"name\": \"credit\", \"render\":\"integer\", \"default\": 8, \"min\" : 0, \"max\": 75 }")
> if (!exists("fraudModel")) {
+ load("fraudModel.rData")
Console Error cannot open the connection
API Error cannot open the connection
I tried following this post to troubleshoot ("deployR cannot open the connection"), but I could not find where my directory was.
When I used the script to look for the working directory (and list contents), I see:
> require(deployrUtils)
> getwd()
[1] "C:/PROGRA~1/MICROS~2/DEPLOY~1.0/rserve/workdir/conn2209460"
> list.files(getwd())
[1] "DeployREngineSource.r" "unnamedplot001.png"
This seems like the wrong directory to be using. When I try to hunt around for my directory for my user, I can't seem to find it. My DeployR version is 8.0.0.
Help is appreciated.
Thanks,
Matt
Is that the remote directory (the second box)?
If so, why don't you call setwd()?
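As a sketch of that suggestion: the Test tab appears to run the script in a per-connection scratch directory, so the load() only works if the model file is referenced by a path that exists there. Something along these lines could make the script independent of the working directory (the fallback path below is purely an assumption; replace it with wherever fraudModel.rData actually lives on your server):

require(deployrUtils)

# look for the model file in the current working directory first
model_file <- "fraudModel.rData"
if (!file.exists(model_file)) {
  # hypothetical fallback path - adjust to your environment
  model_file <- file.path("C:/DeployR/example-fraud-score", "fraudModel.rData")
}

if (!exists("fraudModel")) {
  load(model_file)
}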

opensplice dds Hello Word Example

I am posting here after asking the question on the OpenSplice DDS forum and not receiving any reply. I am trying to use OpenSplice DDS on an Ubuntu machine. I am not sure if it serves as proof of a proper installation, but I have pasted my release.com file below. I was able to run the ping-pong example just fine. But when I ran the executable sac_helloworld_pub (the HelloWorld example in the C programming language), I got the following error:
vishal#expmach:~/HDE/x86.linux2.6/examples/dcps/HelloWorld/c/standalone$ ./sac_helloworld_pub
Error in DDS_DomainParticipantFactory_create_participant: Creation failed: invalid handle
I did some searching, and it looks like I need to run the ospl start command from the terminal. But when I do so, I get a "No command ospl found" message. Below are the contents of the release.com file:
echo "<<< OpenSplice HDE Release V6.3.130716OSS For x86.linux2.6, Date 2013-07-30 >>>"
if [ "${SPLICE_ORB:=}" = "" ]
then
SPLICE_ORB=DDS_OpenFusion_1_6_1
export SPLICE_ORB
fi
if [ "${SPLICE_JDK:=}" = "" ]
then
SPLICE_JDK=jdk
export SPLICE_JDK
fi
OSPL_HOME="/home/vishal/HDE/x86.linux2.6"
OSPL_TARGET=x86.linux2.6
PATH=$OSPL_HOME/bin:$PATH
LD_LIBRARY_PATH=$OSPL_HOME/lib${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH
CPATH=$OSPL_HOME/include:$OSPL_HOME/include/sys:${CPATH:=}
OSPL_URI=file://$OSPL_HOME/etc/config/ospl.xml
OSPL_TMPL_PATH=$OSPL_HOME/etc/idlpp
. $OSPL_HOME/etc/java/defs.$SPLICE_JDK
export OSPL_HOME OSPL_TARGET PATH LD_LIBRARY_PATH CPATH OSPL_TMPL_PATH OSPL_URI
$#
Sorry for the holidays-driven lack of 'reactivity' on the OpenSplice forum... I've answered your question there, though.
Here's that same answer for completeness:
For the 6.3 community edition, the deployment model changed from shared memory (v5.x) to the so-called single-process standalone deployment mode, where the middleware is simply linked (as libraries) with the application, so you don't need to start any daemons first (as was the case for the federated 'shared-memory' mode that was the default in V5).
So it's OK that you get the error when trying to call 'ospl', as that's not used anymore and isn't in the distribution.
Now to your issue: your release.com looks OK to me, but perhaps you didn't actually 'source' it into your environment, i.e. call it with a '.' in front of it:
prompt> . release.com
You can verify that by doing an 'echo $OSPL_HOME' in your shell and checking that it shows the value of the environment variable as set by release.com.
Hope that helps,
-Hans
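In shell terms, the check described above is roughly the following (directory names taken from your listing):

# source the environment into the current shell (note the leading '. ')
cd ~/HDE/x86.linux2.6
. ./release.com

# confirm it took effect
echo $OSPL_HOME    # should print /home/vishal/HDE/x86.linux2.6
echo $OSPL_URI

# re-run the example from the same shell
cd examples/dcps/HelloWorld/c/standalone
./sac_helloworld_pub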

Unexpected error while loading data

I am getting an "Unexpected" error. I tried a few times, and I still could not load the data. Is there any other way to load data?
gs://log_data/r_mini_raw_20120510.txt.gz to 567402616005:myv.may10c
Errors:
Unexpected. Please try again.
Job ID: job_4bde60f1c13743ddabd3be2de9d6b511
Start Time: 1:48pm, 12 May 2012
End Time: 1:51pm, 12 May 2012
Destination Table: 567402616005:myvserv.may10c
Source URI: gs://log_data/r_mini_raw_20120510.txt.gz
Delimiter: ^
Max Bad Records: 30000
Schema:
zoneid: STRING
creativeid: STRING
ip: STRING
update:
I am using the file that can be found here:
http://saraswaticlasses.net/bad.csv.zip
bq load -F '^' --max_bad_record=30000 mycompany.abc bad.csv id:STRING,ceid:STRING,ip:STRING,cb:STRING,country:STRING,telco_name:STRING,date_time:STRING,secondary:STRING,mn:STRING,sf:STRING,uuid:STRING,ua:STRING,brand:STRING,model:STRING,os:STRING,osversion:STRING,sh:STRING,sw:STRING,proxy:STRING,ah:STRING,callback:STRING
I am getting an error "BigQuery error in load operation: Unexpected. Please try again."
The same file works from Ubuntu but does not work from CentOS 5.4 (Final).
Does the OS encoding need to be checked?
The file you uploaded has an unterminated quote. Can you delete that line and try again? I've filed an internal bigquery bug to be able to handle this case more gracefully.
$grep '"' bad.csv
3000^0^1.202.218.8^2f1f1491^CN^others^2012-05-02 20:35:00^^^^^"Mozilla/5.0^generic web browser^^^^^^^^
When I run a load from my workstation (Ubuntu), I get a warning about the line in question. Note that if you were using a larger file, you would not see this warning; instead you'd just get a failure.
$bq show --format=prettyjson -j job_e1d8636e225a4d5f81becf84019e7484
...
"status": {
"errors": [
{
"location": "Line:29057 / Field:12",
"message": "Missing close double quote (\") character: field starts with: <Mozilla/>",
"reason": "invalid"
}
]
My suspicion is that you have rows or fields in your input data that exceed the 64 KB limit. Perhaps re-check the formatting of your data, check that it is gzipped properly, and if all else fails, try importing uncompressed data. (One possibility is that the entire compressed file is being interpreted as a single row/field that exceeds the aforementioned limit.)
To answer your original question, there are a few other ways to import data: you could upload directly from your local machine using the command-line tool or the web UI, or you could use the raw API. However, all of these mechanisms (including the Google Storage import that you used) funnel through the same CSV parser, so it's possible that they'll all fail in the same way.
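If the unterminated quote turns out to be the culprit, one way to proceed is to drop the offending line and retry the load. A sketch, using the delimiter and schema from the original command (note that the bq flag is --max_bad_records, plural):

# drop any line containing a stray double quote (or use sed to delete the exact line flagged by the job)
grep -v '"' bad.csv > bad_clean.csv

# retry the load with the cleaned file
bq load -F '^' --max_bad_records=30000 mycompany.abc bad_clean.csv id:STRING,ceid:STRING,ip:STRING,cb:STRING,country:STRING,telco_name:STRING,date_time:STRING,secondary:STRING,mn:STRING,sf:STRING,uuid:STRING,ua:STRING,brand:STRING,model:STRING,os:STRING,osversion:STRING,sh:STRING,sw:STRING,proxy:STRING,ah:STRING,callback:STRING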