https://www.google.com/reader/api/0/stream/contents/feed/FEEDHERE?output=json&n=20
I'm using this right now to parse RSS and Atom feeds, for a number of reasons. But there's no official API key involved, so I'm afraid something may break in the future, such as Google blocking my access if I make too many queries.
Is there an alternative to this with API keys?
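To illustrate, a minimal call to that endpoint looks something like this (a Python sketch; the feed URL is a placeholder):

    import json
    import urllib.request
    from urllib.parse import quote

    # Unofficial Google Reader endpoint -- no API key involved, which is
    # exactly the worry here.
    feed = "http://example.com/feed.xml"  # placeholder feed URL
    url = ("https://www.google.com/reader/api/0/stream/contents/feed/"
           + quote(feed, safe="") + "?output=json&n=20")

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # The response carries the entries in a top-level "items" list.
    for item in data.get("items", []):
        print(item.get("title"))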
I suggest trying out SimplePie; I wrote a nice RSS reader with it once. It's pretty easy to use.
There are multiple alternatives.
There's Feedly. It's not the minimalistic, omnipresent glory that is Google Reader, but it's close, and in some ways it exceeds Reader's capabilities. They even have a guide for transitioning.
There's NewsBlur. It's developed "in the open": you can check out the code on their GitHub.
There's Netvibes. However, unlike Feedly, there's no simple import route in Netvibes for your Google Reader subscriptions. Instead, you need to export your Reader subscriptions as a .zip file, extract it to find the Subscriptions.xml file, and then import that into Netvibes.
There's also another question on the same subject on Stack Overflow.
And there are many more; just do a Google search.
In my (and many others') opinion, Feedly is the best alternative. But to each their own. Good luck!
Try the Google Feed API; it's easy to use. It supports JSON and XML formats, but it's limited to a maximum of 250 feeds per URL.
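For example, you can call its JSON interface directly over HTTP (a quick Python sketch; the feed URL is a placeholder, and v=1.0 is the required API version):

    import json
    import urllib.request
    from urllib.parse import urlencode

    params = urlencode({
        "v": "1.0",                          # required API version
        "num": 20,                           # how many entries to return
        "q": "http://example.com/feed.xml",  # placeholder feed URL
    })
    url = "https://ajax.googleapis.com/ajax/services/feed/load?" + params

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # Entries come back under responseData.feed.entries.
    for entry in data["responseData"]["feed"]["entries"]:
        print(entry["title"])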
I am in the process of using VBA in SolidWorks to integrate Autodesk's PLM 360 into SolidWorks. The problem is that I have very little experience interacting with REST APIs, and it seems like uploading files to a cloud application is probably the most complicated part of that. In any case, it is very important for me to be able to do two things with VBA from SolidWorks: (1) add images to an item's description (documentation), and (2) upload and attach files to an associated item (documentation).
The PLM 360 documentation explains things fairly clearly, and this blog post (specifically, the approach using WinHttp.WinHttpRequest.5.1) seems to do a decent job of explaining the VBA side, but I'm struggling to figure out which parts of each example actually matter.
Here are the main things that are puzzling me at the moment:
The PLM 360 documentation seems to indicate that one request is being sent in two parts, but I don't understand how to do that in VBA.
The code in the blog post surrounds the binary file with STR_BOUNDARY. Is that necessary in all cases, or is that something that's necessary only for some APIs?
The code in the blog post includes the Content-Disposition and a few other things in the binary portion. Again, is this necessary in all cases, or is that something that's necessary only for some APIs?
The caveat about the pvToByteArray function is confusing to me. I understand that it converts the string containing the file data back into a byte array, but if #3 above is not necessary for PLM 360, do I still need to convert the file to a string and then back to a binary "blob", or can I pass baBuffer directly without failure?
How do I include the rest of the information in the request for creating the item? I suppose this is somewhat related to #1 above, but I don't understand how to send API calls in two parts.
Ultimately, I'm posting this question here because I don't think people on the SolidWorks forum have enough experience with REST APIs to help, and experience shows that the people on the PLM 360 forum don't have enough experience with VBA to help. Stack Overflow seems like the place with the best chance of having people with experience in both areas.
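On #1 through #3: the "two parts" are just a standard multipart/form-data request, and the boundary and Content-Disposition lines are required by that format itself (RFC 2388), not by any particular API. Here's a minimal sketch of what actually goes over the wire, in Python rather than VBA for brevity; the URL and field names are hypothetical, so take the real ones from the PLM 360 documentation:

    import urllib.request

    # Hypothetical endpoint and field names -- substitute the real ones
    # from the PLM 360 docs.
    URL = "https://example.autodeskplm360.net/api/attachments"
    BOUNDARY = "----demo-boundary"

    with open("drawing.png", "rb") as f:
        file_bytes = f.read()

    # A multipart body is: --boundary, part headers, a blank line, the part
    # data, repeated per part, then --boundary-- to close. This is why the
    # blog code wraps the file in STR_BOUNDARY plus a Content-Disposition
    # header: the format demands it, whatever API you target.
    body = b"".join([
        b"--" + BOUNDARY.encode() + b"\r\n",
        b'Content-Disposition: form-data; name="description"\r\n\r\n',
        b"Uploaded from SolidWorks\r\n",
        b"--" + BOUNDARY.encode() + b"\r\n",
        b'Content-Disposition: form-data; name="file"; filename="drawing.png"\r\n',
        b"Content-Type: application/octet-stream\r\n\r\n",
        file_bytes + b"\r\n",
        b"--" + BOUNDARY.encode() + b"--\r\n",
    ])

    req = urllib.request.Request(URL, data=body, method="POST")
    req.add_header("Content-Type", "multipart/form-data; boundary=" + BOUNDARY)
    with urllib.request.urlopen(req) as resp:
        print(resp.status)

On #4: the string-to-byte-array dance in the blog post exists only because VBA concatenates text as strings while the request must be sent as one binary buffer; conceptually the body is just raw bytes, as above.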
APIs are getting more and more popular. Developers use them to ease the process of developing applications for multiple platforms and to let other developers integrate their application's functionality into applications of their own.
I've used APIs countless times before, but I'm now at the stage of developing my own applications. And as a developer who strives to create multi-platform applications, I need an API.
I'm going to use the RESTful approach, as it's the most commonly recommended.
After reading and looking for some background information, I came across the REST API Tutorial (which is a really good site!). I learned that APIs basically receive HTTP requests and return data in JSON/XML format.
However, two questions were left unanswered for me:
What form do APIs come in? Are APIs actually files? A set of commands?
How do I actually write APIs? I'm talking about the server-side, data-handling code, not the application/language-specific code (for sending HTTP requests, etc.).
It'd be great if someone could answer the questions above, as I have zero experience with APIs.
Any help is appreciated. Thanks!
Just a quick from-the-gut answer: They are whatever you want them to be!
Off the top of my head, I would define an API as requiring two main elements:
Some documentation that makes it quite clear how to use the logic your systems provide
Some way to call those systems. That may be as simple as a website that accepts POST messages and checks them for certain variables and values in order to perform specific tasks.
In short, it should be entirely up to you. Just make sure you provide simple, clear, and accurate documentation.
UPDATE, as an answer to the comment below:
That is how I interpret it, and it would seem that Wikipedia is more or less in agreement with me. PHP would be a perfect example: you could, for instance, create a PHP file which processes a POST and, instead of outputting HTML, outputs XML with the resulting data needed. Then a third-party app could POST to your PHP application and receive and process the resulting XML.
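To make that concrete, here's the same idea sketched in Python instead of PHP, using only the standard library (the payload and field names are made up for illustration):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ApiHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the POSTed body (this sketch assumes a JSON payload).
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")

            # "The API" is just ordinary server-side logic...
            result = {"echo": payload, "status": "ok"}

            # ...that returns machine-readable data instead of an HTML page.
            body = json.dumps(result).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), ApiHandler).serve_forever()

So an API is not a special kind of file: it's a URL (or set of URLs) backed by code like this, plus the documentation that tells third parties what to send and what they'll get back.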
APIs respond to HTTP requests. The response is plain text, encoded as JSON or XML as you described.
There are plenty of frameworks to help you develop an API.
In Ruby you can use Grape or rails-api, or even Rails itself.
There are a lot more available, but these are the ones I'm most used to.
If a website is to provide non-free content in the form of videos and documents, how should one serve this kind of content? I don't want to write an occasionally connected desktop application (similar to iTunes) that prevents bought content from being easily shared, so I have a few questions about this:
Documents: What's the best document format for this kind of scenario, one that would prevent someone from sharing it freely after it was bought?
PDF is a great format in that people can't tamper with its content and will actually see what you created, but the problem is that after one obtains it, it can easily be shared with others, whether it has a password or not.
Videos: If supporting video, it's probably wise to use some public (possibly paid) service that can handle video streaming, like YouTube (which, AFAIK, can't host non-public videos).
I'm well aware that there is no 100% perfect solution that prevents it all, but having content that is 90% successfully locked down is still better than hoping people won't share something that can easily be shared.
Can you list a few websites that do something similar? I could learn a lot from them. And please provide some guidelines I should follow in this regard.
Is there maybe a document format similar to PDF that has built-in security capabilities? Even a commercial one... It should support some kind of built-in authorization functionality that would work similarly to software activation.
Note: I wasn't sure whether this should be posted on webapps.stackexchange.com or here, but I decided it should be posted here because it's related to development. I'm generally interested in programmable approaches I could use.
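One programmable approach, since you ask for those: stamp each purchased copy with the buyer's identity (so-called "social DRM") and optionally add an open password. It doesn't technically prevent sharing, but it makes shared copies traceable, which covers a lot of that 90%. A sketch assuming the pypdf and reportlab libraries:

    import io
    from pypdf import PdfReader, PdfWriter  # pip install pypdf
    from reportlab.pdfgen import canvas     # pip install reportlab

    def stamp_copy(src_path, dst_path, buyer):
        # Build a one-page overlay carrying the buyer's identity.
        buf = io.BytesIO()
        c = canvas.Canvas(buf)
        c.setFont("Helvetica", 8)
        c.drawString(36, 20, "Licensed to %s - not for redistribution" % buyer)
        c.save()
        buf.seek(0)
        overlay = PdfReader(buf).pages[0]

        writer = PdfWriter()
        for page in PdfReader(src_path).pages:
            page.merge_page(overlay)  # stamp every page
            writer.add_page(page)

        # Optional: require a password to open (deters casual sharing only).
        writer.encrypt(user_password="per-buyer-password")
        with open(dst_path, "wb") as f:
            writer.write(f)

    stamp_copy("ebook.pdf", "ebook-jane.pdf", "jane@example.com")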
Our company has thousands of PDF documents. How do we create a simple search engine using Lucene, Solr, or Nutch? We'll provide a basic Java/JSP web page where people can type in words and perform basic AND/OR queries, then show them links to all matching PDFs.
I have had good luck with Lucene, but it is not click, install, and search; it does require a bit of work.
If you need something you can download and install and be searching with within 10 minutes, look at the free OmniFind Yahoo! Edition (http://omnifind.ibm.yahoo.net/). It uses Lucene, but is packaged such that it is configured and ready to run upon install, a much easier way to try Lucene.
Nutch + Lucene + the PDF plugin enabled in Nutch is your solution. Nutch allows you to parse PDFs by enabling the PDF plugin.
Lucene will allow you to index the crawled and parsed data, and Nutch has a servlet which gives you a search interface.
We use the same for our internal LANs.
None of the projects in the Lucene family can natively process PDFs, but there are utilities you can drop in and well-written examples of how to roll your own.
Lucene will do pretty much whatever you need it to do, but there is overhead in terms of your time, as Tony said above. Thousands of documents really isn't that many, so you might be able to get away with a lighter weight alternative.
That said, I would still recommend looking at Solr - it's much, much easier to set up than Lucene, has support for backups, replication, etc., as well as a nifty JSON interface which would fit your use case very well: http://wiki.apache.org/solr/SolJSON
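Once the documents are indexed, a query is just an HTTP GET. A sketch assuming a stock Solr instance on localhost:8983 (adjust the path to match your setup):

    import json
    import urllib.request
    from urllib.parse import urlencode

    params = urlencode({
        "q": "invoice AND 2008",  # plain AND/OR query syntax
        "wt": "json",             # request the JSON response format
        "rows": 10,
    })
    url = "http://localhost:8983/solr/select?" + params

    with urllib.request.urlopen(url) as resp:
        results = json.load(resp)

    # Matching documents are listed under response.docs.
    for doc in results["response"]["docs"]:
        print(doc.get("id"))

Your JSP page would do essentially this and render the links.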
Google Search Appliance http://www.google.com/enterprise/gsa/
I think you want a system to manage your PDF files. Try the DSpace system. DSpace is a digital library platform, and its search is based on Lucene. www.dspace.org
Take a look at EPrints. It includes a workflow for adding new documents, automatically indexes and thumbnails PDFs, and has fairly comprehensive full-text search functionality. It can also be easily customised and branded.
Why re-invent the wheel. Again.
Answering such a broad question in this forum will be tough. I'd recommend you check out the book Lucene in Action, which covers the basics of indexing and searching in a quite readable fashion.
Given your application, it sounds like Nutch and Solr probably will not be necessary. Since all of your documents are available locally, Nutch probably won't be helpful. Solr may help you manage a cluster of searchers if you have a high query load, but Lucene is highly performant, and handles large document sets in a very scalable manner.
The one area that might consume a lot of your effort is the use of PDF. It's possible to index PDF documents, and there are Lucene contributions to facilitate the extraction of raw text from PDFs, but depending on the document, the quality of results can vary. Often, the context of a keyword in a PDF document is unclear because of formatting instructions, and that can make it hard to do proximity searches or show the context of a hit.
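The usual workaround is to extract the raw text yourself and hand that to the indexer. For illustration, here's a sketch using the pypdf library in Python (PDFBox plays the same role on the Java/Lucene side); as noted, extraction quality varies with how the PDF was produced:

    from pypdf import PdfReader  # pip install pypdf

    # Pull the raw text page by page; this is the string you would feed
    # to the indexer.
    reader = PdfReader("manual.pdf")
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    print(text[:500])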
A great free search technology you might look at is the IBM Yahoo! free search. I'm not sure whether they followed through on plans to use Lucene under the covers, but it remains one of the really great, easy-to-use free search technologies. It handles up to 500K documents, I believe, and it supports PDF and other non-text formats as well. Graphical user interface; easy to customize search results, and basic search analytics. Basic thesaurus, and a powerful API so you can do pretty much whatever you want if the out-of-the-box results are not to your liking. We've suggested this to a number of clients where there were fewer than half a million documents, and they love it.
If you have a Linux server, you could use Beagle to index the documents and then just use the search functionality that comes with it. It has an (experimental) web search interface, and it can be hooked into the Firefox search box as well.
It automatically indexes files as they're added, and I'd suspect you'll find it much more efficient to enhance or fix Beagle than to write your own search interface to Lucene.
Having the (IMHO) distinct advantage of being on a Mac, I use SearchLight on a somewhat older G5. It's a nice web interface to Spotlight, the Mac OS's built-in indexing service.
How do you guys manage the information overload?
What tools do you use?
One useful tool is an RSS feed reader.
Does anybody use any other tools or any other ways to effectively manage the information?
Be an information snob.
If the blog doesn't absolutely rock your world, don't read it. It's so easy to get bogged down, even obsessed, with too much information. No matter what tools you have, you're still human and can only read so many words per day.
I use Evernote to keep notes and search through them.
I use Google Reader for the feeds, split up into multiple categories: 'A' with the more unique stuff, 'B' with the spam (Digg, for example; easy to ignore because the important stuff shows up in 'A'), and 'C' for my webcomics.
I always read the stuff in 'A'; when bored I read 'C', and 'B' when I have spare time. It happens a lot that I'll mark 'B' as read just to get rid of it.
For work I'm stuck with Outlook, so I use the 'Tasks' function of Outlook a lot to get things sorted. I'm also a big believer in 'Inbox Zero' (http://www.43folders.com/izero).
I use a small number of tools and techniques, because it is easy to get distracted managing the information management tools, rather than managing the information.
Google Reader - The key for me was creating #work and #home labels, for the appropriate location.
TiddlyWiki - I keep track of all my notes for work projects in a TiddlyWiki file.
Delicious - I keep my bookmarks here. When I come across a link I want to read later (usually in my RSS Reader), I tag it #readreview. When I read it, I delete it unless it is useful reference, then I retag appropriately.
Local bookmarks - I store bookmarks on the browser toolbar in folders so I can middle-click and open all in tabs. Obviously these would be limited in number :-). I also have a bookmarklets folder.
I don't have a PDA. I have a pad of graph paper on my desk that I use for writing temporary notes and diagrams (permanent notes go into the TiddlyWiki). A lot of "productivity blogs" like to promote various tools, and some of these caught on for people, but I find my system is pretty simple and easy for me to manage. This makes it useful.
Well, this is an obvious one, but iGoogle seems to do a great job for me.
Depends on what information you are looking to manage. Can you be more specific?
I use Google Reader to handle things I read, Remember The Milk to remind me of what I have to do, and Gmail overall to quickly store and search data and correspondence.
Oh, and I use the Hipster PDA too!
You should probably check out Lifehacker for more tools and Getting Things Done apps.
Like you say, most sites have an RSS feed today. Get an RSS reader that syncs between computers if you use more than one computer, so you don't have to mark a lot of posts as read twice. A good program is FeedDemon; it's free and syncs between computers, and there is even an online version as well for when you are on the road. FeedDemon also has tools to help you identify the feeds you don't really read, and gives you a top 10 of the feeds you look at a lot. This can help you delete bad RSS feeds and also help you organize your feeds.
I also use Delicious to keep my bookmarks in sync, and it's very handy if you bookmark a lot.
Other than that, I don't really use any more tools, just the common sense that there are only 24 hours in the day, so don't use them to read information you don't need. Bookmark interesting blog posts from RSS, and read them later when you need to.
I've been using Delicious quite a bit over the past 2 years and it's been a great help.
If you're primarily interested in blogs, what I think we need is a way to prioritize the information that we, personally, are interested in. There used to be an RSS reader called wTicker (now defunct) that used Bayesian filtering to rate articles for you. Another product under development, Particls, would similarly watch what you read and highlight similar content.
What about other types of information, though? For example, the tasks that OneNote or EverNote, or more obscure tools like Zoot aim to facilitate?
It depends on the type of information you mean. The answers above cover most of the tools, but if you use MS Office you should explore Office OneNote.
iGoogle: news, RSS, weather, new films, and e-mail widgets
ToDoList: everyday work tasks
A local MediaWiki for company knowledge
MS Excel on my smartphone for personal finances
I still read news and blogs from RSS feeds. Feedly is the best tool for that right now.
When I find something interesting in Feedly, I add it to Pocket and read later. A Premium account allows me to highlight paragraphs I would like to save.
I also set up a recipe on IFTTT that monitors my likes on Twitter and adds links from the liked tweets to Pocket too.
As Substack grows, there is a new email newsletter boom. But my inbox is also a place where I do my work. So I wrote an Apps Script file to deliver newsletters once or twice a day and prevent them from distracting me from work. I then published it as the Silent Inbox add-on, which plugs into your Gmail.