I spent about half an hour surfing the various Glassfish web sites, but I was unable to find the source code online.
I don't want to download the code; I just want to look at a couple of specific spots.
Is there anything similar to mxr.mozilla.org?
http://java.net/projects/glassfish/sources/svn/show
It's now here: https://svn.java.net/svn/glassfish~svn/
Is this what you are looking for:
https://glassfish.dev.java.net/source/browse/glassfish/
EDIT: There seems to be a migration going on at java.net, as per:
http://terrencebarr.wordpress.com/2010/11/12/please-read-java-net-migration-move-it-or-loose-it/
You can use FishEye:
https://fisheye4.atlassian.com/browse/glassfish-svn
Probably best to pick a tag from the tree on the left.
I'm attempting a Scrapy-with-Splash project to get a few fields off the website "https://sailing-channels.com/by-subscribers". This site uses JavaScript to load and remove listings as you scroll.
I've not had any luck getting the Splash server to give me the whole set of data, or any of the detailed listings for that matter.
My first question is: can Splash even do this?
I really don't care how I get this data. I would prefer doing it with a program, but any tool that can get me fields from this site into a .csv file would do the job. Does anyone have any suggestions?
Thanks for any advice.
Why do you want to render it? They have a pretty good API; check https://sailing-channels.com/api/channels/get?sort=subscribers&skip=0&take=5&_=1548520116425. So you can iterate, increasing the skip argument and parsing the JSON each time.
Looks like a very promising way.
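For what it's worth, a minimal sketch of that loop with requests might look like this. The endpoint and the sort/skip/take parameters come straight from the URL above; the response shape and the title/subscribers field names are my assumptions, so inspect one real response first and adjust:

    import csv
    import requests

    # Endpoint and query parameters are taken from the API URL above; the
    # response shape (a JSON array of channel objects with "title" and
    # "subscribers" keys) is an assumption -- check a real response first.
    URL = "https://sailing-channels.com/api/channels/get"

    def fetch_all(page_size=25):
        skip = 0
        while True:
            resp = requests.get(URL, params={"sort": "subscribers",
                                             "skip": skip,
                                             "take": page_size})
            resp.raise_for_status()
            batch = resp.json()
            if not batch:          # an empty page means we've reached the end
                return
            yield from batch
            skip += page_size

    with open("channels.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["title", "subscribers"])
        for channel in fetch_all():
            writer.writerow([channel.get("title"), channel.get("subscribers")])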
I am trying to test ODL-SDNiApp and found that it has not been updated since Helium, per this page: https://wiki.opendaylight.org/view/ODL-SDNiApp:User_Guide. So, is it still supported by OpenDaylight? If not, please suggest some useful tools or methods for inter-SDN-controller communication.
Thanks.
According to the project page https://wiki.opendaylight.org/view/ODL-SDNi_App:Main, it last participated in the Boron release, but it doesn't look like it's been active since. You can try the project mailing list or contact the listed committers. If it is as inactive as it appears, perhaps you might want to try to reboot it.
On my blog I have been using the IntenseDebate plugin for a long time as the commenting system, in place of the default one.
I would like to replace it with the Google+ comment system, but I don't want to lose all the comments already left by users via IntenseDebate, so I would like to figure out whether there's any way to load the IntenseDebate plugin on the old posts in place of the default Google+ one.
As a possible solution, I'm thinking of something like a tag in the post's HTML that (if defined) loads the IntenseDebate plugin.
What do you think?
It's not possible to migrate IntenseDebate comments to Google+. There is one solution: you can use multiple comment systems on your Blogger blog. Just a few months ago I wrote up a trick for exactly that. I hope it will be useful to you.
http://www.tipsviablogging.com/multiple-comment-system-blogspot/
Does anybody know if there is a demo of a Kohana admin system? (I am thinking of an admin system like the one Django has.) I am building an online store, and I need a quick way to manage the products inside it. Is there any chance I could use a Kohana admin system to perform this task?
Thanks!
I don't know of any Kohana admin.
The Kohana Auth module will help you create a secure login, but Kohana doesn't ship any pre-built admin or scaffolding.
The reason Kohana has no scaffolding system or admin is that you would spend more time changing and adapting the admin/scaffold to your needs than building one from scratch.
Kohana gives you very impressive tools to build and validate forms.
If you read the docs carefully and understand them, you can program your own admin in less than half an hour, I guarantee you.
A few months ago I was looking for exactly what you are asking about. I was unable to find a finished product, but I found many pieces. I have since been gluing them together as I see fit into the night. The project is quite a buggy mess right now, but it works for my purposes. Once I get it into better shape I plan on posting it to my website, or maybe GitHub, if I ever figure that out...
Lately, however, my job has gotten a bit stricter about coming in early... so I can't code away into the night like I used to. Also, the last component, the jQuery Mobile UI, is still in Alpha 3, so I'm in no major rush either...
My main questions are...
1. The following are the components it utilizes; will those work with your project?
2. If 1 == true... and this is for a potential project, when would you need/like this module by?
Jelly -->
https://github.com/jonathangeiger/kohana-jelly
Formo -->
https://github.com/bmidget/kohana-formo
Formo-Jelly -->
https://github.com/bmidget/kohana-formo-jelly
Jelly-Auth -->
https://github.com/raeldc/jelly-auth
Jelly-Auth-Demo -->
https://github.com/rob/jelly-auth-demo
A neat admin template style -->
http://mathew-davies.co.uk/2010/03/13/free-admin-template.html
A 12-column grid from -->
http://960.gs/
jQuery Mobile UI Elements -->
http://jquerymobile.com/
Posted as an answer instead of a message, just in case others are interested... Gauging the amount of interest will likely determine the amount of evening time sacrificed.
Kohana does not have a magical admin system like Django.
I'd like a way to download the content of every page in the history of a popular article on Wikipedia. In other words, I want to get the full contents of every edit for a single article. How would I go about doing this?
Is there a simple way to do this using the Wikipedia API? I looked and didn't find anything that popped out as a simple solution. I've also looked into the scripts on the PyWikipedia Bot page (http://botwiki.sno.cc/w/index.php?title=Template:Script&oldid=3813) and didn't find anything useful. Some simple way to do it in Python or Java would be best, but I'm open to any simple solution that will get me the data.
There are multiple options for this. You can use the Special:Export special page to fetch an XML stream of the page history. Or you can use the API, found under /w/api.php. Use action=query&titles=$TITLE&prop=revisions&rvprop=timestamp|user|content etc. to fetch the history.
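For example, a bare-bones version of that query with Python's requests. Note this is a sketch: a full history requires following rvcontinue across requests, and newer MediaWiki versions want rvslots=main alongside rvprop=content, both of which are glossed over here:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    params = {
        "action": "query",
        "format": "json",
        "titles": "Coffee",          # any article title
        "prop": "revisions",
        "rvprop": "timestamp|user|content",
        "rvlimit": 50,               # per-request cap; follow "rvcontinue"
    }                                # in the response to page further back

    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    for rev in page["revisions"]:
        # In the non-slot response format the wikitext sits under the "*" key.
        print(rev["timestamp"], rev["user"], len(rev.get("*", "")))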
Pywikipedia provides an interface to this, but I do not know offhand how to call it. An alternative Python library, mwclient, also provides this, via site.pages[page_title].revisions().
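If mwclient fits, the call would look roughly like this; whether each revision's wikitext comes back under the "*" key depends on your mwclient and MediaWiki versions, so treat that detail as an assumption:

    import mwclient

    site = mwclient.Site("en.wikipedia.org")
    page = site.pages["Coffee"]

    # prop must include "content" or you only get metadata; mwclient keeps
    # following the API continuation, so this iterates the whole history.
    for rev in page.revisions(prop="ids|timestamp|user|content"):
        print(rev["timestamp"], rev.get("user"), len(rev.get("*", "")))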
Well, one solution is to parse the Wikipedia XML dump.
Just thought I'd put that out there.
If you're only getting one page, that's overkill. But if you don't need the very latest information, using the XML dump has the advantage of being a one-time download instead of repeated network hits.
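If you do go the dump route, a streaming parse keeps memory bounded. Here is a sketch using only the standard library; the export namespace version (0.10 below) is an assumption that varies between dumps, so check the root element of your dump file:

    import xml.etree.ElementTree as ET

    # Namespace of the <mediawiki> export format; 0.10 is an assumption --
    # check your dump's root element for the real version.
    NS = "{http://www.mediawiki.org/xml/export-0.10/}"

    def iter_revisions(dump_path, wanted_title):
        """Yield (timestamp, wikitext) for every revision of one article,
        streaming through a pages-meta-history dump."""
        title = None
        for _, elem in ET.iterparse(dump_path, events=("end",)):
            if elem.tag == NS + "title":
                title = elem.text
            elif elem.tag == NS + "revision" and title == wanted_title:
                ts = elem.find(NS + "timestamp")
                text = elem.find(NS + "text")
                yield (ts.text if ts is not None else None,
                       text.text if text is not None else "")
            elif elem.tag == NS + "page":
                elem.clear()        # drop finished pages to bound memory

    for ts, text in iter_revisions("enwiki-pages-meta-history.xml", "Coffee"):
        print(ts, len(text))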