I am doing a school project where we need to create an Android application that connects to a database. The application needs to retrieve and store information for people's profiles in the database. Unfortunately we are a bit stuck at this point, because there are numerous ways to link the application to the server, such as plain HTTP requests to Apache or a SOAP/REST web service.
But it's really hard to find good instructions or tutorials on this problem; maybe that's because I'm using the wrong search terms on Google. Unfortunately I have little relevant information, so if anyone can help me find relevant links to good online tutorials or how-tos, those are very welcome.
I'd recommend using REST and JSON to communicate with a PHP script running on Apache. Don't worry about the database on the Android side of things; just focus on what kinds of queries you might need to make and what data you need returned. Then put together a PHP script to take those queries and generate the necessary SQL to query the database on the server. For example, say you need to look up a person by name and show their address in your Android app. A REST query is just a simple HTTP GET to request data. To look up John Smith, you might request: http://www.example.org/lookup.php?name=John+Smith which will return a short JSON snippet generated by PHP:
{
    "name": "John Smith",
    "address": "1234 N Elm St.",
    "city": "New York",
    "state": "New York"
}
You can instruct PHP to use the content type text/plain by putting this at the top of your PHP script:
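<?php
header('Content-Type: text/plain'); // serve the response as plain text
?>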
Then you can just navigate to the above URL in your browser and see your JSON response printed out nicely as a page. There should be a good JSON parser written in Java out there you can use with Android. Hopefully, this will get you started.
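For example, here's a rough sketch using the org.json classes that ship with Android; the URL and field names come from the example above, and error handling is omitted:

import org.json.JSONObject;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: fetch the lookup URL and parse the JSON reply.
public class LookupExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.org/lookup.php?name=John+Smith");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) body.append(line);
        in.close();

        // Field names match the JSON example above
        JSONObject person = new JSONObject(body.toString());
        System.out.println(person.getString("name") + " lives at " + person.getString("address"));
    }
}

On a real device you'd run the request off the UI thread, but the parsing is the same.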
This tutorial really helped me: http://www.screaming-penguin.com/node/7742
Hello, I've recently migrated a website from ExpressionEngine (EE) to Craft using this guide:
https://docs.craftcms.com/feed-me/v4/guides/migrating-from-expressionengine.html
However, the migration did not cover blog post comments, and the client wants the comments for each post migrated as well. Could you please advise me on how to do that?
Here is the information from the JSON file (I replaced the text to protect my client's privacy):
https://codebeautify.org/jsonviewer/y22cc64b3
As seen in the JSON, there are no comments saved anywhere.
And here is the code from the guide that I used
https://gist.github.com/engram-design/5fbe54ef0abb15e3ff6f667291098464
Please let me know if there is any other way of doing that.
I use Quicken, which can automatically download Bank of America transactions. However, it truncates all the payees, so I lose data. I'd like to work around this, and I'm thinking of downloading the transaction data and generating my own QFX file with the full payee info.
Is there a way that I can download transactions programmatically, or download something like a .qif file (available on their website) programmatically? For the latter, I could convert the QIF to a QFX myself.
If anyone has other ideas to download all of the transaction information without losing the payee info, I would welcome those ideas as well.
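For what it's worth, QIF is a simple line-tagged text format, so extracting the full payee from a downloaded .qif is straightforward. A rough sketch (tag meanings follow the common QIF conventions; the file name is a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Rough sketch: print date, amount, and full payee for each QIF record.
// QIF records are line-tagged: D = date, T = amount, P = payee, ^ = end of record.
public class QifPayees {
    public static void main(String[] args) throws IOException {
        String date = "", amount = "", payee = "";
        for (String line : Files.readAllLines(Path.of("transactions.qif"))) { // placeholder file name
            if (line.isEmpty()) continue;
            char tag = line.charAt(0);
            String value = line.substring(1);
            switch (tag) {
                case 'D': date = value; break;
                case 'T': amount = value; break;
                case 'P': payee = value; break; // full, untruncated payee
                case '^': // end of record
                    System.out.println(date + "\t" + amount + "\t" + payee);
                    date = amount = payee = "";
                    break;
                default: break; // ignore other tags (memo, category, ...)
            }
        }
    }
}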
Do they provide an API for this? Most probably not for third parties without a contract. Since it's a bank, there will be browser checks along with the standard sign-in, so it will be hard to do with curl. You could use a browser plugin to read all the data from the page and auto-scroll to load transactions that don't fit on the page. It's a hacky solution, but good for getting what you need, since you said the data is available on the page. You'd have to revisit it for updates, but changes to the basic structure are rare.
A quick search for bank of america api yielded this BofA API. They even have many options for the types of payment information you could query here, as well as lots of individual account types that you can access.
It looks pretty comprehensive. If you don't see what you are looking for there I put another option below, just in case.
I don't use BofA. So I can't speak to what they have natively available. But you could always use a bot to scrape it if they present it anywhere in the User Interface.
I would agree with Meena that you probably won't be able to use curl. But Selenium uses a real browser to programmatically do just about anything that you would want to do with any website. They also have bindings for many languages, so you could just pick your favorite and go to town...
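For example, here's a bare-bones sketch with the Selenium Java bindings that scrolls the page to force more transactions to load; the URL is a placeholder, and you'd still have to sign in and pull the rows you want out of the page source:

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Sketch: drive a real browser and scroll so more transactions render.
public class ScrollTransactions {
    public static void main(String[] args) throws InterruptedException {
        WebDriver driver = new ChromeDriver();
        driver.get("https://www.bankofamerica.com/"); // placeholder; sign in first
        JavascriptExecutor js = (JavascriptExecutor) driver;
        for (int i = 0; i < 10; i++) {
            js.executeScript("window.scrollTo(0, document.body.scrollHeight);");
            Thread.sleep(2000); // wait for new rows to render
        }
        System.out.println(driver.getPageSource().length()); // scrape from the page source
        driver.quit();
    }
}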
It seems the API will return JSON, so you may need to find a tool to convert that to a QIF or QFX if that part is important. After digging further (I can't test this without having a CashPro account), it seems what you need to do is:
Step 1:
Get an access token from here. You'll need to send this in the header of every request.
Step 2:
Send an HTTP request with a JSON body in the following format:
{
"accounts": [
{
"accountNumber": "xxxxxxx",
"bankId": "xxxxxxx"
}
],
"fromDate": "yyyy-mm-dd",
"toDate": "yyyy-mm-dd"
}
to https://developer.bankofamerica.com/cashpro/reporting/v1/transaction-inquiries/previous-day
Step 3:
You should get a JSON response back.
As mentioned, I can't test this but here's the documentation of the specific API endpoint you need
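To make the shape of the call concrete, here's an untested sketch with Java's built-in HTTP client. The endpoint and body come from the steps above, but the account values and date placeholders need to be filled in, and the Authorization header format is an assumption, so verify it against the linked docs:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Untested sketch of Step 2; auth header format is an assumption.
public class CashProInquiry {
    public static void main(String[] args) throws Exception {
        String body = "{ \"accounts\": [ { \"accountNumber\": \"xxxxxxx\", \"bankId\": \"xxxxxxx\" } ],"
                + " \"fromDate\": \"yyyy-mm-dd\", \"toDate\": \"yyyy-mm-dd\" }"; // fill in real values

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://developer.bankofamerica.com/cashpro/reporting/v1/transaction-inquiries/previous-day"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer <access token from Step 1>") // assumed header format
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // Step 3: the JSON response
    }
}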
Does anybody know where the documentation for the Metacritic API is, or if it still works? There used to be a Metacritic API at https://market.mashape.com/byroredux/metacritic-v2#get-user-details which disappeared today.
Otherwise, I'm trying to scrape the site myself, but I keep getting blocked by a 429 Slow Down response. I got data about 3 times this hour and haven't been able to get any more in the last 20 minutes, which is making testing difficult and the application possibly useless. Please let me know if there's anything else I could be doing to scrape that I don't know about.
I was using that API as well for an app I wrote a while ago. Looks like the creator removed it from Mashape. I just sent him an email to ask whether it'll be back up. I did find this scraper online. It only has a few endpoints but following the examples given you could easily add more. Let me know if you make any progress!
Edit: Looks like CBS requested it to be taken down. The ToS prohibits scraping:
[…] you agree not to do the following, or assist others to do the following:
Engage in unauthorized spidering, “scraping,” data mining or harvesting of Content, or use any other unauthorized automated means to gather data from or about the Services;
Though I was hoping for a JavaScript way of doing this, the creator of the API also told me some info.
He says I was getting blocked for not having a User-Agent in the header, and that I should use a 429 handling procedure, i.e. re-request with longer pauses in between.
A PHP plugin is available as well: http://datalinx.io/shop/metacritic-api/
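If you're not using Ruby, the same two fixes (send a User-Agent and back off on a 429) look roughly like this in Java; the user agent string and pause lengths are arbitrary:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: set a browser-like User-Agent and retry with growing pauses on 429.
public class PoliteFetcher {
    public static String fetch(String pageUrl) throws IOException, InterruptedException {
        long pauseMs = 2000; // arbitrary starting pause
        for (int attempt = 0; attempt < 5; attempt++) {
            HttpURLConnection conn = (HttpURLConnection) new URL(pageUrl).openConnection();
            conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh) Firefox/90.0"); // any browser-like UA
            if (conn.getResponseCode() == 429) {
                Thread.sleep(pauseMs);
                pauseMs *= 2; // longer pause before the next try
                continue;
            }
            return new String(conn.getInputStream().readAllBytes());
        }
        throw new IOException("still rate-limited after retries");
    }
}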
I had to add a user agent like JCDJulian said, and now it allows me to scrape. So for Ruby:
agent = Mechanize.new
agent.user_agent_alias = "Mac Firefox"
Then it stopped giving me the 403 Forbidden error.
I'm about to create an application that uses JSON to update its content.
This is how I planned it to work:
When the application starts, it checks (if an internet connection is available) whether the JSON file on the remote server is newer than the one stored locally; if it is, it gets downloaded.
Then the application applies the data from that JSON to its content, for example to the "Contact" information: it applies data like phone numbers etc.
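Something like this sketch is what I have in mind for the freshness check, assuming the server sets a Last-Modified header on the JSON file (the URL and local file name are placeholders):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: re-download content.json only when the server copy is newer.
public class ContentUpdater {
    public static void updateIfNewer() throws Exception {
        Path local = Path.of("content.json"); // placeholder local file
        long localTime = Files.exists(local) ? Files.getLastModifiedTime(local).toMillis() : 0;

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://www.example.com/content.json").openConnection(); // placeholder URL
        conn.setIfModifiedSince(localTime); // server answers 304 if nothing changed

        if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
            try (InputStream in = conn.getInputStream()) {
                Files.copy(in, local, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}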
My question is: in your opinion, is this a good technique for updating an application's content?
Has anyone had experience building an app around this kind of idea?
Best regards,
Zin
Of course you can do this. One thing that may lead to a better user experience would be to ask the user for his permission to download new content (if there is something new).
This is a normal thing to do. I have a phonebook app that does exactly this. On a side note, if you need a network class to handle the web-service interaction, see this SO post. I wrote a custom network class that works with AFNetworking.
Once I have my renamed files I need to add them to my project's wiki page. This is a fairly repetitive manual task, so I guess I could script it but I don't know where to start.
The process is:
Go to the appropriate page on the wiki
for each team member (DeveloperA, DeveloperB, DeveloperC)
{
for each of two files ('*_current.jpg', '*_lastweek.jpg')
{
Select 'Attach' link on page
Select the 'manage' link next to the file to be updated
Click 'Browse' button
Browse to the relevant file (which has the same name as the previous version)
Click 'Upload file' button
}
}
Not necessarily looking for the full solution as I'd like to give it a go myself.
Where to begin? What language could I use to do this and how difficult would it be?
Check if the wiki you mean to talk to supports XMLRPC, because if it does it should be a snap. I wrote a tool called WikiUp to solve a similar problem (updating a delineated section on a wiki page).
If you're writing in C#, the WebClient classes might be a good place to start. I bet people could give more specific advice if you mentioned which wiki platform you are using, and whether it requires authentication, though.
I'd probably start by downloading fiddler and watching the http requests from doing it manually. Then you could use some simple scripts and regexes to build your http requests for automating the process.
Of course, if you're wildly lucky, your wiki will have a backend simple enough that you could just plug the data into its db directly. :)
You might find CoScripter useful -- it's a Firefox extension that allows you to automate tasks you perform on websites. I'm not certain how you'd integrate this with the list of files you're changing on your local system, but it can certainly handle the file uploading through a web form.
A better bet is probably using cURL or a similar HTTP library with your programming language of choice. If you're on *nix, you can use the cURL command-line program inside your shell script to get this done fairly easily. (Like #jsight said, you will need to analyze the actual forms you're using on the webpage, using Fiddler or just looking at the form elements and re-creating the POST through cURL.)
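If you go the HTTP-library route, the upload step boils down to replaying the form's multipart POST. Here's a rough Java sketch; the URL and the "file" form field name are placeholders, so pull the real ones (plus any auth cookies) out of the Fiddler capture:

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: hand-rolled multipart/form-data POST mimicking the wiki's attach form.
public class WikiUpload {
    public static void upload(Path file) throws IOException {
        String boundary = "----WikiUploadBoundary";
        URL url = new URL("http://wiki.example.com/attach"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

        try (DataOutputStream out = new DataOutputStream(conn.getOutputStream())) {
            out.writeBytes("--" + boundary + "\r\n");
            out.writeBytes("Content-Disposition: form-data; name=\"file\"; filename=\""
                    + file.getFileName() + "\"\r\n\r\n"); // "file" field name is a guess
            out.write(Files.readAllBytes(file));
            out.writeBytes("\r\n--" + boundary + "--\r\n");
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }

    public static void main(String[] args) throws IOException {
        for (String dev : new String[] {"DeveloperA", "DeveloperB", "DeveloperC"}) {
            upload(Path.of(dev + "_current.jpg"));
            upload(Path.of(dev + "_lastweek.jpg"));
        }
    }
}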