I am doing a POC to connect to Apache Hive from an Apache Beam pipeline, and I am getting an exception similar to the one in the SO link below. I changed the JDBC driver to the latest version, but I am still facing the issue.
As mentioned in the link below, it was apparently due to a cluster issue. If anyone can explain the issue clearly, I can guide the corresponding team accordingly and get it resolved.
If you need any other information, I will provide it.
Apache Beam - org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (Method not supported)
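For context, here is a minimal sketch of the kind of JdbcIO read I am attempting (the HiveServer2 host/port and table name are hypothetical). As I understand it, the PoolableConnectionFactory error typically arises because JdbcIO pools connections with Apache DBCP, and the Hive JDBC driver throws "Method not supported" from several optional JDBC methods that the pool calls while validating connections:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.StringUtf8Coder;
    import org.apache.beam.sdk.io.jdbc.JdbcIO;

    public class HiveReadPoc {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        pipeline.apply(JdbcIO.<String>read()
            .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                "org.apache.hive.jdbc.HiveDriver",       // Hive JDBC driver class
                "jdbc:hive2://hive-host:10000/default")) // hypothetical HiveServer2 URL
            .withQuery("SELECT name FROM my_table")      // hypothetical table
            .withCoder(StringUtf8Coder.of())
            .withRowMapper(rs -> rs.getString(1)));

        pipeline.run().waitUntilFinish();
      }
    }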
I am using Pivotal GemFire 9.1.1 through Pivotal Cloud Cache 1.3.1 and ran into the following error while using the @EnableClusterConfiguration SDG annotation:
2018-11-17T16:30:35.279-05:00 [APP/PROC/WEB/0] [OUT] org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on ac62ca98-0ec5-4a30-606b-1cc9(:8:loner):47710:a6159523:: The function is not registered for function id CreateRegionFunction
2018-11-17T16:30:35.279-05:00 [APP/PROC/WEB/0] [OUT] at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:184)
Finally, I ran into this post - https://github.com/spring-projects/spring-boot-data-geode/issues/15
Is there any other annotation I can use with Spring Boot 2+ that will help me create GemFire Regions dynamically?
Thanks!
Unfortunately, no; there is currently no other way to "dynamically" push cluster/server-side configuration from a Spring/GemFire cache client to a cluster of PCC servers running in PCF using SDG/SBDG.
This is now also due to an underlying issue, SBDG Issue #16 - "HTTP client does not authenticate when pushing cluster config from client to server using @EnableClusterConfiguration with PCC 1.5."
For the time being, you must manually create Regions (and Indexes) using the documentation provided by PCC.
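For example (a sketch of the manual approach, not an official workaround): after creating the server-side Region by hand in Gfsh (e.g. create region --name=Customers --type=PARTITION), the client application can declare the matching client Region explicitly with SDG rather than relying on @EnableClusterConfiguration to push it; "Customers" is just a hypothetical Region name here:

    import org.apache.geode.cache.GemFireCache;
    import org.apache.geode.cache.client.ClientRegionShortcut;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.gemfire.client.ClientRegionFactoryBean;

    @Configuration
    public class CustomersRegionConfiguration {

      // Declares a client PROXY Region matching the server-side "Customers" Region
      // created manually with Gfsh; no cluster configuration is pushed to the servers.
      @Bean("Customers")
      public ClientRegionFactoryBean<Long, Object> customersRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean<Long, Object> customers = new ClientRegionFactoryBean<>();
        customers.setCache(gemfireCache);
        customers.setShortcut(ClientRegionShortcut.PROXY);
        return customers;
      }
    }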
I am sorry for any inconvenience or trouble this has caused you. This will be resolved very soon.
This does work in a local, non-managed context, even when starting your cluster (servers) using Gfsh. It just does not work in PCF using PCC, yet.
Regards.
Question: I thought WebLogic and GlassFish/Payara were completely different servers that do not share any common code/components. How did I end up hitting a WebLogic CVE while using Payara?
Configuration: Both our development and production systems are under Payara:
Payara 4.1.1.171.1 Full edition
Oracle Java 1.8.0_144
CentOS 7
Symptoms:
We see illegal connections to the URLs /wls-wsat/CoordinatorPortType11 and /wls-wsat/ParticipantPortType under anonymous authentication, despite having Apache Shiro as our security system.
We have an unknown Python program running in our production environment. Nothing has been found in development so far.
The development Payara instance has shut down once, and one deployment failed, leaving Payara stopped (start-domain was required). The production Payara instance has shut down once. All of this happened for unknown reasons; notably, there were at most one or two users doing nothing special at the time of the shutdowns.
What I can (not) do:
After seeing this and reading this, I think the problem is solved for WebLogic systems, but I don't know the mapping between GlassFish and WebLogic versions, if one exists.
Unless I missed something big, I haven't found anything relating CVE-2017-10271 to Payara.
We are planning to upgrade to Payara 4.1.2.174 shortly, but I have no guarantee it will fix this issue.
I'm trying to check how Shiro can block such connections (see the sketch after this list).
I'm asking this question to make sure there is (or is not) a relationship between WebLogic and GlassFish/Payara before opening an issue on Payara's GitHub. I tried unsuccessfully to run the Python script; I don't know Python. :(
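On the blocking point: independent of whatever Shiro's filter chain can be configured to deny, a plain servlet filter can reject anything under /wls-wsat/ at the application level. A rough sketch (the filter name and the 404 response are my own choices, not from any Payara or Shiro documentation):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.annotation.WebFilter;
    import javax.servlet.http.HttpServletResponse;

    // Rejects any request under /wls-wsat/ with a 404 before it reaches the application.
    @WebFilter("/wls-wsat/*")
    public class WlsWsatBlockingFilter implements Filter {

      @Override
      public void init(FilterConfig filterConfig) {
      }

      @Override
      public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
          throws IOException, ServletException {
        ((HttpServletResponse) response).sendError(HttpServletResponse.SC_NOT_FOUND);
      }

      @Override
      public void destroy() {
      }
    }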
I am evaluating DataStax OpsCenter on a virtual machine to start managing/monitoring Cassandra. I am following the online docs to create cluster topology models via OpsCenter LCM, but the error message doesn't provide enough information for me to continue. The job statuses are:
error- MeldError, 400 Client Error: Bad Request for url: http://[ip_address]:8888/api/v1/lcm/internal/nodes/6185c776-9034-45b4-a54f-6eb9511274a2/package_information
Meld failed on name="testnode1" ssh-management-address=[ip_address]" node-id="6185c776-9034-45b4-a54f-6eb9511274a2" node-name="testnode1" job-id="1b792c69-bcca-489f-ad12-a6285ba84d59" stdout=" Meld has started... " stderr=""
My question is: what might be wrong, and is there any hint on how to resolve it?
I am new to the Cassandra and DataStax communities, so please forgive me if I ask a silly question!
Q: I used to be a Buildbot user, and the DataStax agent looks like a Buildbot slave. Why don't we need to set up the agent on the remote machine to work with OpsCenter? Is the working directory of the agent configured in OpsCenter?
The opscenterd.log: https://pastebin.com/TJsvmr6t
According to the tool compatibility matrix at https://docs.datastax.com/en/landing_page/doc/landing_page/compatibility.html#compatibilityDocument__opsc-compatibility , I actually use OpsCenter v5.2 for monitoring and basic DB operations. After some trial and error with the agent's .yaml and Cassandra 2.2's .conf, the Dashboard works!
Knowledge gained:
OpsCenter 5.2 actually works with Cassandra 2.2, even though this combination is not listed in the compatibility table.
For beginners who are not sure where to start: try installing all the components on one machine to get an idea of the minimal viable working setup, and from there configure the actual dev/test/production environments.
My understanding of AWS X-Ray is that it is similar to Dynatrace, and I am trying to use X-Ray to monitor Apache performance. I do not see any documentation relating X-Ray to Apache except the link below.
https://mvnrepository.com/artifact/com.amazonaws/aws-xray-recorder-sdk-apache-http
Can anyone please tell me whether it is possible to use AWS X-Ray with Apache, and if so, point me to some documentation about it? Thanks.
I assume that by "apache" you mean the Apache Tomcat servlet container, since you are referring to a Maven artifact (Maven being a Java build tool).
Disclaimer: I don't know what Dynatrace is, and I don't know which kind of logging you specifically want.
But as far as the Apache Tomcat servlet container and X-Ray go, here is the link to get started:
http://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java.html
Start by adding AWSXRayServletFilter as a servlet filter to trace incoming requests. A servlet filter creates a segment. While the segment is open, you can use the SDK client's methods to add information to the segment and create subsegments to trace downstream calls. The SDK also automatically records exceptions that your application throws while the segment is open.
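For illustration, here is a sketch of registering the filter programmatically (Servlet 3.0+); "MyTomcatApp" is just a placeholder segment name, and the guide above also shows the equivalent web.xml declaration:

    import com.amazonaws.xray.javax.servlet.AWSXRayServletFilter;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;

    @WebListener
    public class XRayFilterRegistration implements ServletContextListener {

      @Override
      public void contextInitialized(ServletContextEvent event) {
        // Opens a segment named "MyTomcatApp" for every incoming request.
        event.getServletContext()
            .addFilter("AWSXRayServletFilter", new AWSXRayServletFilter("MyTomcatApp"))
            .addMappingForUrlPatterns(null, false, "/*");
      }

      @Override
      public void contextDestroyed(ServletContextEvent event) {
      }
    }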
As for the mentioned maven artifact:
aws-xray-recorder-sdk-apache-http – Instruments outbound HTTP calls made with Apache HTTP clients
So you'll need this if, say, a client makes a request to your Tomcat server and your Tomcat server in turn makes a request to another server, thus acting as a client itself.
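A sketch of that outbound case, using the recorder SDK's drop-in replacement for Apache's HttpClientBuilder (the downstream URL is made up). Note this assumes a segment is already open, e.g. because the call happens inside a request traced by AWSXRayServletFilter:

    import com.amazonaws.xray.proxies.apache.http.HttpClientBuilder; // X-Ray's drop-in builder
    import java.io.IOException;
    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;

    public class DownstreamCall {

      // Swapping the import from org.apache.http.impl.client.HttpClientBuilder to the
      // X-Ray proxy is enough; each request is recorded as a subsegment.
      public static String fetch() throws IOException {
        CloseableHttpClient client = HttpClientBuilder.create().build();
        try (CloseableHttpResponse response = client.execute(
            new HttpGet("http://downstream.example.com/api"))) { // hypothetical downstream URL
          return response.getStatusLine().toString();
        }
      }
    }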
I'm looking to tune my Hadoop MapReduce jobs for better performance with optimal resource utilization, but I'm unable to get started. Can anyone tell me how to configure Apache Hadoop Vaidya? I was following the Apache blog for Hadoop Vaidya, which describes very well how to use it.
In some blog I found the path
$HADOOP_HOME/contrib/vaidya/bin/
which is not present on my machine, so I'm assuming that I have to install/configure Apache Hadoop Vaidya.
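For reference, the invocation the blog describes (for the Hadoop 1.x contrib version) looks roughly like the line below; I am quoting the flags from memory, so they may be off. If it matters, my understanding is that the contrib directory no longer ships with Hadoop 2.x tarballs, which would explain the missing path.

    $HADOOP_HOME/contrib/vaidya/bin/vaidya.sh -jobconf <job_conf.xml> -joblog <job_history_log>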
Any help will be appreciated!!