I have been given an ETL project which requires me to ingest some data gleaned from GA into GoodData via an API and perform some ETL operations on it. The creation of reports and dashboards is also an integral part of this assignment.
It is my first time using this platform. If there is any way, method or procedure you can recommend for doing this, that would be great.
Thanks
As you are new to GoodData, the basic tutorial may help you understand the basic concepts - https://help.gooddata.com/display/doc/GoodData+Developer+Tutorial
Specifically for data loads, there are several ways to load data into the GoodData platform.
You can look at https://help.gooddata.com/display/doc/Data+Loading as a starting point.
As you specifically mention the API, I recommend this part of the documentation - https://help.gooddata.com/display/doc/Loading+Data+via+REST+API
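To give a rough idea of what the REST API route involves, here is a minimal Python sketch using the requests library. The flow follows the documentation linked above (cookie-based login, staging upload, then an ETL pull), but the host names, project ID and upload package below are placeholders you would need to verify against your own domain and environment:

```python
import requests

HOST = "https://secure.gooddata.com"        # placeholder - use your own (white-label) domain
STAGING = "https://secure-di.gooddata.com"  # staging (WebDAV) host, may differ per environment
PROJECT_ID = "your_project_id"              # placeholder

session = requests.Session()

# 1. Log in and obtain the temporary token (cookie-based authentication)
session.post(f"{HOST}/gdc/account/login", json={
    "postUserLogin": {"login": "user@example.com", "password": "secret", "remember": 0}
})
session.get(f"{HOST}/gdc/account/token")

# 2. Upload the package (CSV + SLI manifest zipped as upload.zip) to the staging area
with open("upload.zip", "rb") as f:
    session.put(f"{STAGING}/uploads/my-upload-dir/upload.zip", data=f)

# 3. Trigger the ETL pull that loads the staged data into the project
resp = session.post(f"{HOST}/gdc/md/{PROJECT_ID}/etl/pull2",
                    json={"pullIntegration": "my-upload-dir"})
print(resp.status_code, resp.json())
```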
If you are interested in a component ready for direct communication with GA, you can use the "Google Analytics Reader" component (https://help.gooddata.com/cloudconnect/manual/gareader.html) within CloudConnect Designer (https://help.gooddata.com/display/doc/CloudConnect+Designer).
If you intend to transform data prior to loading it into the GoodData platform, CloudConnect can be utilised, or (depending on your GoodData environment) the Agile Data Warehousing Service (https://help.gooddata.com/pages/viewpage.action?pageId=34341138) tied to Automated Data Distribution (https://help.gooddata.com/display/doc/Automated+Data+Distribution) may be an option for you.
Your question is quite general, but regarding GA and using CloudConnect I recommend checking the example in the GoodData documentation:
https://help.gooddata.com/display/doc/CloudConnect+Example+Projects#CloudConnectExampleProjects-GoogleAnalyticsExampleProject
I am brand new to ADF and am creating my very first data factory. I am using the UI option (if anyone can point me to any documents on doing it in code, I'd be most grateful).
I will have 3 different environments - dev/test/prod. Each of these has slightly different configs (yes, I know!), so my datasets and linked services will need to change for each environment. What is the best way to do this? How would you approach it?
(P.S.: We also have Bitbucket and Jenkins/Octopus for CI/CD, so ideally we would like to create scripts to automate this if possible.)
Thank you
You can create a data factory using code. You can find sample code with detailed information here.
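For reference, a minimal sketch of creating a data factory in code with the Python management SDK could look like this (assuming the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, factory name and region are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

# Placeholders - substitute your own values
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<data-factory-name>"

# Authenticates via environment variables, managed identity or an Azure CLI login
credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

# Create (or update) the data factory itself
factory = client.factories.create_or_update(
    RESOURCE_GROUP, FACTORY_NAME, Factory(location="westeurope"))
print(factory.name, factory.provisioning_state)
```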
There are two approaches to deploying ADF pipelines:
1. ARM template
2. Custom approach (JSON files, via REST API) - with this approach we can fully automate the CI/CD process, as the collaboration branch becomes the source for deployment. This is why the approach is also known as (direct) deployment from code (JSON files). See the sketch below.
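As a minimal sketch of the custom (JSON via REST API) approach in Python: the subscription, resource group, factory and pipeline names below are placeholders, and in a real CI/CD setup the bearer token would come from a service principal configured in Jenkins/Octopus:

```python
import json
import requests

# Placeholders - substitute your own values (e.g. from pipeline variables)
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY = "<data-factory-name>"
TOKEN = "<bearer-token-from-service-principal>"

def deploy_pipeline(name: str, definition_path: str) -> None:
    """PUT a pipeline JSON file (exported from the collaboration branch) into ADF."""
    url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
           f"/resourceGroups/{RESOURCE_GROUP}"
           f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
           f"/pipelines/{name}?api-version=2018-06-01")
    with open(definition_path) as f:
        body = json.load(f)
    resp = requests.put(url, json=body,
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()

deploy_pipeline("CopySalesData", "pipelines/CopySalesData.json")
```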
Refer to this blog by Kamil Nowinski.
The scope of the question is broad, but this video by Mohamed Radwan practically shows how you can deploy and manage 3 different environments, i.e. ADF-DEV, ADF-PROD and ADF-UAT.
I have to produce a report in .txt format once a day and upload it to an SFTP server. I have generated the report in BigQuery but can't find a way to export it as .txt. Is it possible?
There are quite a number of ways to accomplish this, and almost all involve some extent of coding with clients of your choice or great GCP tools like Dataflow, etc. They all require skilled engineers at hand.
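For illustration, one of those coded options might look roughly like the following Python sketch, assuming the google-cloud-bigquery and paramiko packages; the query, column names and SFTP details are placeholders:

```python
from google.cloud import bigquery
import paramiko

# 1. Run the report query and write the rows to a local .txt file
client = bigquery.Client()  # assumes application default credentials
rows = client.query("SELECT name, total FROM `my_project.my_dataset.report`").result()

with open("report.txt", "w") as f:
    for row in rows:
        f.write(f"{row['name']}\t{row['total']}\n")

# 2. Upload the file to the SFTP server
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="reports", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("report.txt", "/upload/report.txt")
sftp.close()
transport.close()
```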
For sure, there will be a few answers covering those options.
In the meantime, I want to provide a different option.
There are some third-party tools that help achieve the same without extra coding (other than the BigQuery query itself).
Below is an example of how simple it is to do with Magnus, which is part of the Potens.io suite of powerful and efficient tools for BigQuery, designed so that even a non-engineer can easily explore and automate workflows and become self-sufficient in their data needs, as in your question.
Disclosure: Google Developer Expert in Cloud here - author of BigQuery Mate and Potens.io (Magnus and Goliath) productivity tools
So, in the screenshot below you see a workflow with just two tasks.
The first task defines the payload of your report, and the second task uploads it to the client's SFTP server.
Below you can see the flip side of the second task with more settings - zero coding!
In this particular example you do not even need to persist your report in a BQ table - the second task will just pick it up from the first task (although in real life you will most likely want to preserve the report, which is still easy to set up in the first task using a Destination entry).
I recommend you give it a try.
I want to build a project on a "Live Vehicle Tracking System" using J2EE. The following are my basic ideas:
a website from which the end user can track the vehicle (tracking can be done on Google Maps);
a GPS system embedded in the vehicle so that it can send its location to the server.
I am thinking of using J2EE. Please suggest whether to use this or another language.
This is the basic idea. Please make corrections if necessary.
Thank you
We are building this sort of application using Spring, the ZK framework, Maven and MySQL. We also have our own device for vehicle tracking, and we already have some applications built for testing. I can guarantee that J2EE is an appropriate solution for it. For example, we barely had to write any SQL queries by hand - all of them are easy to generate. We have seven years of experience in this field with different tools, so yes, it's a good solution.
I want to use a file sharing server to keep certain files up to date and consistent across multiple instances of my application on multiple computers. For example, imagine a multiplayer game which stores all the players' positions in a text file and uses something like Dropbox to keep that text file consistent across all the running applications: each application instance changes the file with its own player's position, and the rest of the applications update accordingly. This is only an example, and is not what I intend to do using this technology. What I want to do does not rely on sharing data very quickly - only on periodically downloading and updating the text file.
I was wondering how I might be able to do this using the Dropbox API for Objective-C without prompting the user for any Dropbox username/password - just store a single Dropbox account's login information, log into it automatically and update/download the file stored on it?
From what I have found out from experimenting, Dropbox prompts users for their passwords via a web browser, and is designed to accommodate multiple accounts, whereas I only need to accommodate the 'server' account.
So, is there any way to do this sort of thing using the Dropbox API, or should I use something else? Or do I need to work out how to write my own server? Using some sort of file sharing API seems a lot easier to me than writing an actual server.
Thanks for any help,
Ben
You might think about using Google App Engine (GAE). I had a similar requirement recently, and I think this is a good option when you want centralized data. Plus, you can do the no-browser account login by using your own custom authentication, or I think it's even possible via OAuth? It depends on how sensitive the data is, I guess. I just rolled my own.
From my research I found that using Dropbox as a server has some issues with scalability, since you'll be limited to maybe 5,000 calls per day (source). It's built on Amazon S3, so you could also look at using that directly.
GAE lifts that limit up to 675,000 calls per day, and it can be increased up to 91 million for free.
https://developers.google.com/appengine/docs/quotas
I did find an open-source project for doing this with Java; alternatively, you could look at a Python example.
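If you do look at S3 directly, as suggested above, a minimal sketch of sharing one file between application instances could look like this in Python (boto3 is assumed, and the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials are configured locally
BUCKET = "my-shared-state-bucket"   # placeholder bucket name
KEY = "positions.txt"               # the shared file every instance reads/writes

def download_state(local_path: str = "positions.txt") -> None:
    """Fetch the latest copy of the shared file."""
    s3.download_file(BUCKET, KEY, local_path)

def upload_state(local_path: str = "positions.txt") -> None:
    """Push this instance's updated copy of the shared file."""
    s3.upload_file(local_path, BUCKET, KEY)

download_state()
# ... modify positions.txt with this instance's data ...
upload_state()
```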
I've written a daemon that continuously checks for updated files and syncs them. I wrote it for my own file manager iOS app. You can find the implementation here:
https://github.com/H2CO3/MyFile/tree/master/DropboxDaemon
I'm personally not an iOS developer but I came across this question while looking for something else and thought I would offer up another potential solution to the OP's question.
Microsoft just released something called Azure Mobile Services, which supports iOS development (among other platforms). It's basically a convenient way to set up a back-end system complete with push notifications, authentication, etc. without rolling your own. You don't need to know anything about Azure or servers, as the setup process walks you through most of it. It is new, so keep that in mind, but it looks promising for situations like this.
Here's a 10-minute video explaining how to use it with an app developed for iOS, along with links to more documentation:
http://channel9.msdn.com/posts/iOS-Support-in-Windows-Azure-Mobile-Services/
Hope this helps.
A recent article has prompted me to pick up a project I have been working on for a while. I want to create a web service front end for a number of sites to allow automated completion of forms and retrieval of data from the results, as well as from other areas of the sites. I have achieved a degree of success using Selenium and custom code; however, I am looking to extend this to the point where adding additional sites is a trivial task (maybe even one which doesn't require a developer).
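Roughly, the per-site Selenium code I have now looks something like the sketch below (the URL, field names and selectors are placeholders); each new site currently needs its own hand-written variant of this:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder URL and field names - every target site needs its own version of this
driver = webdriver.Firefox()
driver.get("https://example.com/quote-form")

# Fill in the form and submit it
driver.find_element(By.NAME, "postcode").send_keys("AB1 2CD")
driver.find_element(By.NAME, "product").send_keys("widget")
driver.find_element(By.ID, "submit").click()

# Scrape the result from the response page
price = driver.find_element(By.CSS_SELECTOR, ".result .price").text
print(price)

driver.quit()
```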
The Kapow web data server looks like it achieves a lot of this; however, I am told it is quite expensive (I am currently awaiting a quote). Has anyone had experience with it, or can anyone suggest any alternatives (open source, ideally)?
Disclaimer: I realise the potential legal issues around automating data retrieval from third-party websites - this tool is designed to be used in a price comparison system, and all of the websites integrated with it will be done with the express permission of the owners. Where the sites provide an API, that will clearly be the favoured approach.
Thanks
I realise it's been a while since I posted this; however, should anyone come across it, I have had lots of success using the WSO2 framework (particularly the Mashup Server) for this. For data mining tasks I have also used a Java library that it wraps - webharvest - which has achieved everything I needed.