There's a new feature in Hive called LLAP. During my investigation I found that LLAP is quite difficult to configure, which is why there is a component called Apache Slider that configures it for you. Still, I couldn't find any documentation for manual configuration without Slider. https://cwiki.apache.org/confluence/display/Hive/LLAP
Take a look at this documentation.
https://hortonworks.com/hadoop-tutorial/interactive-sql-hadoop-hive-llap/
[Update] It seems the above page has been removed by Hortonworks.
The only option I can suggest now is
https://www.google.com/search?q=hadoop+interactive+sql+hadoop+hive+llap&oq=hadoop+interactive+sql+hadoop+hive+llap&gs_l=serp.3..35i39k1.5338.10878.0.11135.4.4.0.0.0.0.199.655.0j4.4.0....0...1c.1.64.serp..0.3.499.N2KWHY3UFi8
I would like to connect to a MariaDB database via SSL in a Quarkus application. However, I cannot find a way to define the SSL-related information in the Quarkus application.
How do I provide the certificate that is needed for the database connection in a Quarkus application?
Is it even possible?
If not, I assume that many would be interested in that feature.
I searched https://quarkus.io/guides/datasource but did not find anything about this.
MariaDB reference: https://mariadb.com/kb/en/library/using-tls-ssl-with-mariadb-java-connector/
There's no reason for it not to work. Just include what you need in your JDBC URL (see the sketch below).
Be aware, though, that if you are using native images, you should read the guide that walks you through configuring everything properly: https://quarkus.io/guides/native-and-ssl
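As an illustration only, this is roughly what it might look like in application.properties; the property names and the SSL parameters (useSsl, serverSslCert) are assumptions to verify against your Quarkus version and the MariaDB Connector/J reference linked in the question, and the host, database, credentials and certificate path are placeholders:
quarkus.datasource.url=jdbc:mariadb://db.example.com:3306/mydb?useSsl=true&serverSslCert=/path/to/ca.pem
quarkus.datasource.username=appuser
quarkus.datasource.password=secret
Newer Quarkus versions name the URL property quarkus.datasource.jdbc.url, but the idea is the same: the SSL options simply ride along as query parameters of the JDBC URL.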
I am trying to get a simple Apache stack running and came across something I have not seen before. This is an AWS instance running the Bitnami LAMP stack. If I create an incomplete HTML file such as:
<h1>Something Here</h1>
Apache is prepending <head/> to the response, e.g.
<head/><h1>Something Here</h1>
I am serving an Angular 2 app from this stack, and loading of the component templates is failing since they are being seen as malformed. Does anyone know what Apache setting or module might be doing this?
Thanks
PageSpeed is the one that's adding that <head/>. PageSpeed is enabled by default on the Bitnami LAMP Stack.
This is added by the default mod_pagespeed add_head filter. You can disable it by adding the line below to /opt/bitnami/apache2/conf/pagespeed.conf:
ModPagespeedDisableFilters add_head
However, note that this filter is needed by many other filters, which will only write their contents into the <head> element.
You can also disable PageSpeed entirely, as explained in the guide below, to check that the <head/> element disappears (a quick sketch follows the link):
https://docs.bitnami.com/aws/infrastructure/lamp/#how-to-disable-the-cache-in-the-server
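If you just want to confirm quickly that PageSpeed is responsible, one way (assuming the standard Bitnami layout mentioned above and its usual control script) is to switch the module off in /opt/bitnami/apache2/conf/pagespeed.conf and restart Apache:
ModPagespeed off
sudo /opt/bitnami/ctlscript.sh restart apache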
We are trying to upgrade from Archiva 1.3.6 to 2.1.1, but suddenly the remote repositories (including the proxy connectors) stopped working. The remote repository view shows error marks in the "Remote check" column, but no error message is shown.
Is there a way to find out what is going on?
We are behind a proxy; we tried with the proxy activated and deactivated. I even installed Archiva locally on my machine with a fresh database, but still no success.
(how does this remote check even work when the proxy is activated/deactivated in the proxy connectors?)
Eclipse (with the newest m2e) says Missing artifact junit:junit:jar:3.8.9. It fails so fast that I don't think Archiva is even trying to reach the Central repository.
The logs on the Archiva side are empty.
Does anybody have some hints, or the same problem? I will try it at home tonight to see whether it is a network issue.
Thanks in advance for any tips!
Update
It really seems that the proxy connector does not work, since the internal repository is empty: http://localhost:8080/archiva/repository/internal/ only shows .indexer.
Update 2
The proxy configuration seems to be broken in Archiva 2.1.1. I can see the same behaviour as described here: Mailing List
A JIRA task for this would be nice.
Does anybody know a workaround to set the proxy for a proxy connector? Or is there a possibility to set a global proxy via a settings file?
Update 3
This really seems like a bug in Archiva. I sent a mail to the mailing list. Hopefully it gets fixed soon, because it is a blocker for every user behind a proxy.
I won't delete this question, for documentation purposes, in case someone has the same problem. The issue can be found in JIRA here.
I also had this problem and the simple solution was to change the proxy protocol from "http" to "https".
I also had the same problem. At first glance the solution given by Christian Quast seemed to work, but it didn't solve the problem. I eventually worked around it by using the JVM proxy settings below (a note on where to place them follows the list):
-Dhttp.proxyHost=[your_proxy_address]
-Dhttp.proxyPort=[your_proxy_port]
-Dhttp.proxyUser=[your_proxy_user_name]
-Dhttp.proxyPassword=[your_proxy_user_password]
-Dhttp.nonProxyHosts=localhost|127.0.0.1|::0|[any_other_hosts_not_to_use_proxy]
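Where these flags go depends on how Archiva is started, and the following is an assumption rather than something verified above: with the standalone distribution, which runs under the Java Service Wrapper, they can typically be added to conf/wrapper.conf as extra wrapper.java.additional.N entries (using index numbers that do not clash with the existing ones), for example:
wrapper.java.additional.10=-Dhttp.proxyHost=proxy.example.com
wrapper.java.additional.11=-Dhttp.proxyPort=3128
If Archiva runs inside a servlet container such as Tomcat instead, the same flags can be appended to the container's JAVA_OPTS/CATALINA_OPTS.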
Update
I know it may sound weird, but even with the settings above, the error/warning icon under "Remote Check" may still appear. If you add the "network proxy" (mine uses the https protocol) to your remote repository, the error/warning icon is still there; but if you then edit the remote repository again and remove its "network proxy", the OK/sun icon shows up.
In my case, <networkProxy> under conf\settings.xml gets updated correctly, including the port information (probably because my port is not the default 8080), but the remote repository connection still fails.
Also, changing proxy protocol to https did not help.
I know the proxy settings are right because I use the same ones in Maven's .m2\settings.xml.
Fortunately I am only evaluating open-source repository management tools. I started with Archiva since it is from Apache and we use Maven in our project. I would have moved ahead with it if this critical issue had a fix or workaround. I guess I will have to take a shot at Nexus.
Exactly the same problem here. I can't vote on your bug report because I have no JIRA account.
As far as I can tell, there seems to be a problem with the configuration file ~/.m2/archiva.xml: the proxy is saved without port information (a sketch of the relevant block follows below).
Hopefully this bug will be fixed as soon as possible.
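For reference, and as an assumption about the file format rather than something verified above, the relevant block in ~/.m2/archiva.xml should end up looking roughly like this once the port (and, per the earlier answer, an https protocol) is present; the id, host and port are placeholders:
<networkProxies>
  <networkProxy>
    <id>corporate-proxy</id>
    <protocol>https</protocol>
    <host>proxy.example.com</host>
    <port>3128</port>
  </networkProxy>
</networkProxies>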
Extending João Ferreira's reply, to access repositories with https URLs (such as Maven Central), you will also need:
-Dhttps.proxyHost=[your_proxy_host]
-Dhttps.proxyPort=[your_proxy_port]
I am on Mac OS X Lion using Nginx 1.4.1, in conjunction with Tornado.
In the process of installing the Nginx upload module (v. 2.2.0) I encountered some compatibility issues. See this reference for more info. Apparently, there is no great fix for this as of yet. My specific error is rooted in: error: no member named 'to_write' in 'ngx_http_request_body_t'
Is there a way to make the two of these reliably compatible without jumping through hoops?
Or, is there a suitable alternative to using this upload module that will work with Nginx 1.4.1?
If not, should I consider using Nginx 1.3.8? And if so, where can I download this version? I do not see it available for download on their website here.
Thank you for the help. Regards.
1) No, it doesn't seem like there is, as the maintainer of nginx-file-upload has implied he doesn't want to maintain it any more.
2) I found this article which lists some alternatives, one of which is nginx-big-upload. I've not tried it yet.
3) Well, you could consider it, but then you're tied to a package that isn't maintained. What happens if there's a security vulnerability for 1.3.8? You can't upgrade without either patching or changing your file upload strategy. If you want to, you can find all of the older Nginx versions here.
The situation is pretty frustrating at the moment, but there are options; just none of them are tried and true. When dealing with production systems, stability and security are key.
1) Yes, this module does not support nginx 1.4+.
2) The reason is that nginx added support for chunked transfer encoding and improved its code design; as part of that, the to_write field was removed from the ngx_http_request_body_t struct.
3) https://github.com/hongzhidao/nginx-upload-module. This is an alternative module. It supports the latest nginx, and its feature set is equivalent.
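For illustration only (the locations, paths and backend port are placeholders, and it is assumed that the fork above keeps the classic upload-module directive names such as upload_pass and upload_store), a minimal configuration would look roughly like this:
location /upload {
    upload_pass /upload_done;
    upload_store /tmp/nginx_uploads;
    client_max_body_size 100m;
}
location /upload_done {
    proxy_pass http://127.0.0.1:8888;
}
The module intercepts the multipart upload at /upload, writes the file parts to upload_store, and then passes a rewritten request on to the backend location.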
Is there any way of POSTing a new schema to Solr (e.g. is there a handler for managing schema updates) instead of manually placing the new schema.xml in the Solr home directory?
Unfortunately that's an open issue as of this writing, and there doesn't seem to be much interest in implementing it.
As suggested in the comments, you can work around this by setting up some external transfer mechanism such as WebDAV, FTP, SFTP, or SCP.
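For example, a minimal sketch of the SCP route, assuming a core named mycore and the default Solr port (host and paths are placeholders); after copying the file, the core can be reloaded through the CoreAdmin API so the new schema takes effect:
scp schema.xml user@solr-host:/path/to/solr/mycore/conf/schema.xml
curl "http://solr-host:8983/solr/admin/cores?action=RELOAD&core=mycore"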