I'm using Pentaho Data Integration 9 Community Edition and trying to connect to MongoDB Atlas, but without success.
I tried the URL MongoDB provides:
mongodb+srv://<username>:<password>@something.XYZ.mongodb.net/<dbname>?retryWrites=true&w=majority
Which gives me the following error:
org.pentaho.mongo.MongoDbException: Malformed host spec: mongodb+srv://<username>:<password>@something.XYZ.mongodb.net/<dbname>?retryWrites=true&w=majority
I saw a tip to switch to the old-style connection string, something similar to the following:
mongodb://user:password@cluster0-shard-00-00-wuhae.mongodb.net:27017,cluster0-shard-00-01-wuhae.mongodb.net:27017,cluster0-shard-00-02-wuhae.mongodb.net:27017/shop?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin&retryWrites=true
but that also failed.
Any ideas?
You need to specify the replica-set hosts instead, since PDI doesn't seem to support the mongodb+srv syntax.
So in my case I had to use the following host list:
test-shard-00-01.XYZ.mongodb.net,test-shard-00-00.XYZ.mongodb.net,test-shard-00-02.XYZ.mongodb.net
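If you don't want to copy the host list from the Atlas UI, you can recover it from DNS yourself, since mongodb+srv is just an SRV lookup. Below is a minimal sketch, assuming the dnspython package and a hypothetical cluster hostname, that resolves the SRV record into the comma-separated host list PDI expects (the cluster's TXT record typically carries the replicaSet/authSource options):

```python
# Minimal sketch: resolve an Atlas "mongodb+srv" hostname to its
# replica-set members. Assumes `pip install dnspython`; the cluster
# hostname below is a placeholder.
import dns.resolver

cluster = "something.XYZ.mongodb.net"  # hypothetical: the part after "@" in the SRV URI

records = dns.resolver.resolve("_mongodb._tcp." + cluster, "SRV")
hosts = ",".join(f"{r.target.to_text().rstrip('.')}:{r.port}" for r in records)
print(hosts)  # e.g. test-shard-00-00.XYZ.mongodb.net:27017,...

for r in dns.resolver.resolve(cluster, "TXT"):
    print(r.to_text())  # typically "authSource=admin&replicaSet=<name>"
```

Paste the resulting host list into the host field of PDI's MongoDB connection, enable SSL, and set the authentication database accordingly.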
I am trying to migrate data from BigQuery to Redshift using this article. I followed it through and successfully got to "Start the Local Data Migration Task". I had to set up an AWS profile to access "Data Migration View (Other)"; the profile was set up using the access key and secret of an admin user account in AWS.
However, upon starting the task I keep getting the following error:
class com.amazon.dmt.model.FileCredentials cannot be cast to class com.amazon.dmt.model.UserCredentials (com.amazon.dmt.model.FileCredentials and com.amazon.dmt.model.UserCredentials are in unnamed module of loader 'app')
I checked the AWS documentation and looked around, but this error is not listed anywhere. I can't understand why a cast from FileCredentials to UserCredentials is being attempted. What am I missing?
Has anyone faced a similar issue, or can someone point me in the right direction?
Based on my testing, I have determined that this is an issue in version 1.0.670 of SCT. A request has been submitted to correct the issue. In the meantime, to allow you to continue with your project, please revert to AWS SCT version 1.0.666 using this link: https://d211wdu1froga6.cloudfront.net/builds/1.0/666/Windows/aws-schema-conversion-tool-1.0.zip
You will have to uninstall SCT and the extractor agent, then reinstall and configure the previous version as you did before.
I have Apache Superset installed via Docker on my local machine. I have a separate production 20-node Spark cluster with Hive as the metastore. I want Superset to be able to connect to Hive and run queries via Spark SQL.
For connecting to Hive, I tried the following under **Add Database --> SQLAlchemy URI**:
hive://hive@<hostname>:10000/default
but it gives an error when I test the connection. I believe I have to do some tunneling, but I am not sure how.
I have the Hive Thrift server running as well.
Please let me know how to proceed.
What is the error you are receiving? Although the docs do not mention this, the best way to provide the connection URL is in the following format:
hive://<url>/default?auth=NONE (when there is no security)
hive://<url>/default?auth=KERBEROS
hive://<url>/default?auth=LDAP
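To rule out Superset itself, you can test the same URI directly with SQLAlchemy. A minimal sketch, assuming the pyhive[hive] and sqlalchemy packages and a placeholder hostname:

```python
# Minimal sketch: verify the Hive SQLAlchemy URI outside Superset.
# Assumes `pip install "pyhive[hive]" sqlalchemy`; the hostname is a placeholder.
from sqlalchemy import create_engine, text

engine = create_engine("hive://hive-host.example.com:10000/default?auth=NONE")
with engine.connect() as conn:
    print(conn.execute(text("SHOW TABLES")).fetchall())
```

If this works from the host machine but not from the Superset container, the problem is Docker networking rather than the URI (see the next answer).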
First you should connect the two containers together.
Let's say you have container_superset running Superset and container_spark running Spark.
Run: docker network ls (this lists the Docker networks)
Select the name of the Superset network (it should be something like superset_default).
Run: docker run --network="superset_default" --name=<container-name> --publish port1:port2 imageName
where port1:port2 is the port mapping and imageName is the Spark image.
I've downloaded Apache Apex 3.5.0 along with Malhar 3.5.0.
I've successfully started the apex client and submitted the Yahoo Finance demo example to our YARN cluster (running CDH 5.10). The cluster is running and configured properly (many Spark and MR jobs are running on it).
I see the application I submitted as RUNNING in YARN as well as in the Apex CLI. However, when I try to connect to the Application Master I get a 404:
org.apache.hadoop.yarn.webapp.WebAppException: /: controller for default not found
I also tried to connect directly to the appMasterTrackingUrl reported by the get-app-info command, and I get the same error.
I tried a couple of the Apex examples, and I always get the same error.
Any idea why?
It is somewhat expected. Add "/ws/v2/stram/info" to the URL path.
When you connect to the App Master you need to provide the complete URL for a REST API to invoke. There is nothing to show or return for "/", so what you are seeing is expected. What are you trying to do by connecting to the App Master?
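For example, here is a minimal sketch using Python's requests package; the tracking URL below is a placeholder, so substitute the appMasterTrackingUrl that get-app-info reports:

```python
# Minimal sketch: call the STRAM REST API on the Apex App Master instead
# of its root path (which is what returns the 404 above).
# Assumes `pip install requests`; the URL is a placeholder.
import requests

tracking_url = "http://yarn-node.example.com:8088/proxy/application_1234_0001"
resp = requests.get(tracking_url + "/ws/v2/stram/info")
resp.raise_for_status()
print(resp.json())  # STRAM metadata such as the application name and state
```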
I have a webapp that I'm trying to use Mondrian within, and I'm getting the following exception when I try to open a connection:
Caused by: mondrian.olap.MondrianException: Mondrian Error:Internal error: Virtual file is not readable: /WEB-INF/olap/mycube.xml
I have tested this cube using a plain J2SE program from the command line, and it works fine. However, when I run the same cube in my web application I get the error above. My connect string is the following:
jdbc:mondrian:Jdbc=jdbc:mysql://${server.db.host}/HRWarehouse?user=${server.db.username}&password=${server.db.password};Catalog=/WEB-INF/olap/mycube.xml;
This is very similar to what I've found in the Mondrian web application. However, that application has somehow installed the ServletContext in the VFS, and there is exactly zero documentation that I can find through Google about any special configuration for Mondrian in a web application.
I have worked around the issue by setting the path to the schema as an absolute reference instead of relative to the webapp context. While this has allowed me to continue testing, it is not a suitable solution to the problem. I'm looking for an answer that fixes the exception while still allowing a webapp-context-relative URL.
I think you need to specify the full file path in the Catalog:
`jdbc:mondrian:Jdbc=jdbc:mysql://${server.db.host}/HRWarehouse?user=${server.db.username}&password=${server.db.password};Catalog=file:/path/to/schema.xml;`
I can't recall whether it needs an absolute path or not, so try both.
I would also double-check the connection string just to make sure it's written properly. Also, here is a link that might end up being helpful if you don't have it already.
Update
Try adding jndi:/ at the beginning of your path:
jndi:/localhost/path/to/file.xml
I have tried for days now to find the right version of red5phone, but to no avail. Also, I need Red5 and Asterisk to be on different servers. I have followed all the instructions described in the various tutorials on the web, but nothing helps. I have downloaded two different versions of red5phone, sip1 and sip_47, from the red5phone Google Code site, but neither of them worked.
When I use:
a) sip1 - it shows the correct parameters being passed on the Red5 server console, but the connection gets stuck, with the console displaying the following error:
[NioProcessor-1] ERROR o.r.server.service.ServiceInvoker - Method login with parameters [<sip user>, <sip user>, <sip user>, <sip user pwd>, <asterisk server i/p>, <asterisk server i/p>] not found in org.red5.server.webapp.sip.Application#2d0c94a7
b) sip_47 - when I type the values into the Flex interface and check the Red5 server console, I see all the parameters I passed correctly, except for the IP of the Asterisk server I am trying to connect to. Instead, it shows 127.0.0.1 by default, completely ignores the passed IP, and registration fails.
I am using:
Red5 server version: 0.9.1, CentOS: 4.8 (final), red5phone used: sip_47 (tried sip1 as well)
As a desperate measure I tried debugging the red5phone source code (Java and Flex files) myself, but when I try to set up the environment on my local system, I get several compile errors for missing Java packages such as javax.media, org.slf4j, org.red5, etc. I'm really confused and desperate for some guidance. Any tips highly appreciated.
Sunil, I'm also new to this, but I would try red5-voicebridge, installed on your Red5 server.
Please let me know if you get it to work.