Where to get sample OFX files for testing?

I am building a PHP application using the OFX Parser class from http://www.phpclasses.org/package/5778-PHP-Parse-and-extract-financial-records-from-OFX-files.html. But where can I get a sample OFX file to exercise this class and test my application?

Try searching "filetype:ofx" in google. I have found a couple there. If you need a whole bunch for a more complete test I don't know.

Easiest by far is to have an online bank account yourself that supports OFX downloads. But you're right; it's surprisingly difficult to find anything past the simplest case online.
I dug up this article on IBM developerWorks that includes a quick sample. It's about parsing OFX with PHP, and it helpfully shows the difference between a well-formed XML version of an OFX file and the start-tag-only SGML version you'll often get when you download from various banks, but even this sample contains only one withdrawal and one deposit.
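To make the difference concrete, here is a minimal hand-written sketch of the SGML-style (OFX 1.x) flavor you will typically get from a bank download. The values are made up and several required aggregates are trimmed away, so treat it as illustrative rather than a complete, valid statement; note that the leaf elements have no closing tags while the aggregates do:

```
OFXHEADER:100
DATA:OFXSGML
VERSION:102
SECURITY:NONE
ENCODING:USASCII
CHARSET:1252
COMPRESSION:NONE
OLDFILEUID:NONE
NEWFILEUID:NONE

<OFX>
  <BANKMSGSRSV1>
    <STMTTRNRS>
      <STMTRS>
        <CURDEF>USD
        <BANKTRANLIST>
          <STMTTRN>
            <TRNTYPE>DEBIT
            <DTPOSTED>20120101
            <TRNAMT>-25.00
            <FITID>0000001
            <NAME>SAMPLE WITHDRAWAL
          </STMTTRN>
        </BANKTRANLIST>
      </STMTRS>
    </STMTTRNRS>
  </BANKMSGSRSV1>
</OFX>
```

The OFX 2.x flavor carries the same element structure, but as well-formed XML with closing tags on every element and an XML declaration instead of the key:value header block.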

Try using https://github.com/wesabe/fixofx. It has a script called fakeofx.py:
The fakeofx.py script generates real-ish-seeming OFX for testing and demo purposes. You can generate a few fake OFX files using the script, and upload them to Wesabe to try it out or demonstrate it without showing your real account data to anyone.

The script uses some real demographic data to make the fake transactions it lists look real, but otherwise it isn't at all sophisticated. It will randomly choose to generate a checking or credit card statement and has no options.

These are the two references I used. The first one covers the structure of an OFX file, and the second one gives you the connection information for the financial institutions.
http://www.ofx.net/
http://www.ofxhome.com/

Related

Migrate from YouTrack to JIRA

After using YouTrack for quite a while, my organization is considering a move to JIRA (for a number of reasons). However, JIRA doesn't seem to include a YouTrack importer/migration out of the box (though there seem to be plenty of importers/migrations in the other direction).
Has anyone migrated from YouTrack to JIRA who can share their experience?
Edit:
To anyone who might have this problem later, my final solution ended up something like this:
transfer all "basic" data by hand (user accounts, basic project setup etc)
write a small C# program using the atlassian sdk and the youtrack sdk that transfers from one to the other (creating empty placeholder issues if issues was missing due to someone deleting them in youtrack in order to keep numbering).
This approach worked good enough and I managed to transfer pretty much all data without any loss of any very important data (though of course all timestamps are messed up now, but we saw that as an acceptable loss).
Important to know is that youtrack handles issues moved from one project to another a bit counter-intuitive (they still show up in their first project even when they're moved away from there, but they have an issue id from their new project - a slight wtf when I ran into that the first time).
Also, while the atlassian sdk did allow me to "spoof" the creator of an issue (that is, being logged in as used A and creating an issue while telling the system that it's actually user B who is creating this issue) it does not allow you to do this with comments. So in order to transfer those properly I had to actually loop through the comments and log in with the corresponding new user and post the comments.
Also, attachments from youtrack was a bit annoying to download, so I ended up having to download those "by hand". :/
But all in all, it was relatively pain-free. Some assembly required, some final touch-ups required, but it was all done within a couple of days.
I had the same problem. After a discussion with the JIM (JIRA Importer) developer, I used the YouTrack REST API and a Python script to produce JSON files, and then used JIM's JSON import.
With this solution you can import almost all fields from YouTrack - the standard ones, files with descriptions, links between issues and projects, and so on.
I don't know if I can push it to GitHub - I have to ask my boss, since I wrote it during work hours - but of course you can ask me about it if you want.
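In case it helps, here is a rough sketch of that approach (not my actual script). The YouTrack REST path, the issue field names, and the exact JIM JSON schema below are assumptions from memory, so verify them against your YouTrack version and the JIM documentation before relying on them:

```python
# Sketch: pull issues from the YouTrack REST API and write a JSON file
# for the JIRA Importer (JIM) to consume. The endpoint path, field
# names, and JSON schema are assumptions; verify them for your versions.
import json
import urllib.request

YOUTRACK = "http://youtrack.example.com"  # hypothetical host
PROJECT = "DEMO"                          # hypothetical project key

# Hypothetical endpoint: fetch the issues of one project as JSON.
req = urllib.request.Request(
    f"{YOUTRACK}/rest/issue/byproject/{PROJECT}?max=1000",
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    issues = json.load(resp)

jira_issues = [
    {
        "externalId": issue.get("id"),
        "summary": issue.get("summary", ""),
        "description": issue.get("description", ""),
        "issueType": "Bug",  # map YouTrack types to JIRA types here
    }
    for issue in issues
]

# General shape of a JIM JSON import file: projects containing issues.
out = {"projects": [{"name": "Demo", "key": PROJECT, "issues": jira_issues}]}
with open("jira-import.json", "w") as f:
    json.dump(out, f, indent=2)
```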
The easiest approach is probably to export the data from YouTrack into CSV and use the JIRA CSV importer. You may have to modify some of the data to fit the format the CSV importer expects.
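For illustration, the CSV can be quite simple; the column names here are placeholders, since the CSV importer lets you map columns to JIRA fields in its import wizard:

```
Summary,Description,Issue Type,Reporter,Created
"Login fails on IE9","Steps to reproduce: ...",Bug,jsmith,2012-01-15 10:23
"Typo on start page","See attached screenshot",Task,adoe,2012-01-16 09:02
```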

How to access results of Sonar metrics for use with applications like PowerPivot

I'm trying to run a number of applications with known failure rates through Sonar, in the hope of deciding which metrics are most valuable in determining whether a particular application will fail. Ultimately I'll be making some sort of algorithm that looks at the outputs of whatever metrics I'm using and generates a score from 1 to 100. I've put about 21 applications through Sonar, and the results have been stored in a MySQL database. I originally planned to use PowerPivot to find relationships in the data, but it seems like the formatting of the tables doesn't lend itself well to that. Other questions on Stack Overflow have told me that Sonar's tables are unformatted and that I should instead use the web service API to get the information. I'm unfamiliar with web service APIs and was unsuccessful in doing what I wanted by reading Sonar's API documentation.
From an answer to another question:
http://nemo.sonarsource.org/api/timemachine?resource=org.apache.cxf:cxf&format=csv&metrics=ncloc,violations_density,comment_lines_density,public_documented_api_density,duplicated_lines_density,blocker_violations,critical_violations,major_violations,minor_violations
This looks very similar to what I'd like to have, except that I'm only looking at each application once (I'm analyzing a sample of all the live applications on a grid), which means the timemachine service isn't really what I'm looking for. Would it be possible to generate a similar table, except that instead of showing the stats for a particular application per date, it showed the statistics for an application and all of its classes?
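For that per-resource view, older Sonar versions also expose a resources web service that returns the latest measures (rather than a history) and can drill down into a project's children. The endpoint and parameters below are from memory and may differ across Sonar versions, so check them against your server's web service documentation. A minimal sketch:

```python
# Sketch: fetch the latest measures for a project and all of its files
# from Sonar's (old) resources web service as CSV. The endpoint and
# parameter names are from memory and may differ by Sonar version.
import urllib.request

BASE = "http://localhost:9000"      # your Sonar server
RESOURCE = "org.example:myapp"      # project key, hypothetical
METRICS = "ncloc,violations_density,comment_lines_density"

# depth=-1 is assumed to mean "include all descendants";
# scopes=FIL is assumed to restrict the results to files.
url = (f"{BASE}/api/resources?resource={RESOURCE}"
       f"&metrics={METRICS}&depth=-1&scopes=FIL&format=csv")

with urllib.request.urlopen(url) as resp:
    csv_data = resp.read().decode("utf-8")

# One row per file; a .csv file loads straight into PowerPivot.
with open("sonar_measures.csv", "w") as f:
    f.write(csv_data)
```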
If you're not familiar with the WS API, you can also create your own Sonar plugin to achieve whatever you want: it is written in Java, and it executes on every analysis you run. This way, in the code of this custom plugin, you can do whatever you want: flush the metrics you need to an output file, push them to a third-party system, and so on.
Just take a look at how to write a plugin (most probably you will create a Decorator). There are also concrete examples to get you started faster.

How do I access SQL from XPages

What is the process to access data from a SQL data source and have it fill in a list box control so that the user may select one of the values?
I have been given the name of the database and server, the login ID and password.
Code samples would really be appreciated as I have never done any SQL coding.
The latest Extension Library on OpenNTF (extlib.openntf.org) has a whole bunch of relational database extensions.
You'll need to get the JDBC drivers for whatever SQL server you're going to be accessing, and then take a look at the ExtLib demo application for how to create the JDBC connector from your application. Once the connector is in place, you can just use the new controls in ExtLib to easily create a view panel, etc.
You will also need more than the SQL server name, username, and password: you'll need to find out the different tables you'll be accessing so that you can reference them from your XPages application.
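For reference, the ExtLib demo defines each connection as a small XML file inside the NSF (something like WebContent/WEB-INF/jdbc/mydb.jdbc). The element names below are from memory and may not match your ExtLib release exactly, so copy a working file from the demo application rather than this sketch:

```xml
<jdbc>
  <driver>com.mysql.jdbc.Driver</driver>
  <url>jdbc:mysql://sqlserver.example.com:3306/mydb</url>
  <user>username</user>
  <password>password</password>
</jdbc>
```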
I've created a video showing JDBC access from XPages: http://www.youtube.com/watch?v=p6oRCsTsVqc
Wait for the book about the ExtLib that will be released soon. I know Jeremy Hodge wrote the chapter, so you might be able to get some info from him.
From an answer I gave earlier: you might want to check out the blog post announcing the JDBC support. It has an excellent video explanation and a link to a slide deck.
Also, take a look at XPages101 lesson 61. It's paid-for content, but well worth it if you're serious about XPages development.
If you want to combine Upgrade Pack 1 (UP1) with the Extension Library JDBC parts, then make sure to use the Extension Library that matches exactly the UP1 version. This is version 853-20111215 of the Extension Library. Then you can use the update site method to only deploy the experimental parts of the Extension Library (com.ibm.xsp.extlibx.feature_8.5.3.20111215-0914.jar).
For newer releases of the Extension Library, things might (will) have changed, so that UP1 and the Extension Library cannot work together.
When UP2 is released, you will need to remove the Extension Library package and deploy UP2. At that point UP2 might contain the JDBC support.
Roy,
As the previous posters said, the Extension Library will make things a little more drag-and-drop, but you can use a regular JDBC connection to get the data you want. It's pretty simple, but a lot more code than using Domino as a backend. You might want to look at this John Mackey blog post about doing a very similar thing: http://www.jmackey.net/groupwareinc/johnblog/johnblog.nsf/d6plinks/GROC-7G9GT4
Keep in mind that you need the actual Extension Library for this; the Upgrade Pack does not contain the JDBC stuff.
Edit:
Keep in mind that if you don't need "live" data access, and the information you want is fairly static, you could always just use a LotusScript agent to pull the data down into Notes documents. Run that once a day or whatever; no fancy XPages stuff needed. That's fairly common coding practice, with examples available.
Then just have the list box pull from the documents you brought down.

How to set up a development environment for YQL and open table development? How to test it locally? (best practices)

From time to time I develop Yahoo open tables to access different resources on the web. Currently I use a JavaScript editor and, when I want to test whether my open table works, I upload the XML table description to a server and test it with a YQL client application. This approach is quite slow, and sometimes I get blocked by Yahoo because of a mistake in my open table description. Therefore I would like to learn about best practices for developing and testing Yahoo open tables locally. What does your setup for open table development look like?
To clarify my question: I am looking for any convenient way (best practice) to develop and test YQL tables, e.g., running parts of the JavaScript inside Rhino.
First of all: I agree; I don't see a really convenient way to test YQL data table definitions locally either. Nevertheless, here is how I approach the issue.
Hosting on GitHub
YQL data table definitions are often used in very open scenarios, e.g. when there is an existing API that you want to wrap via YQL. Therefore I normally work on a fork of the YQL community tables and just add my own definitions there. In this case, the hosting of the .xml files takes place on GitHub: https://github.com/yql/yql-tables
The other advantage of this approach is that it is easy for me to share my data tables with the community if I feel they might be valuable for others as well.
Hosting privately
A free GitHub account only comes with public repositories, though, so everybody would be able to see and use your data tables. If that is not acceptable for you, then you could either buy a GitHub Pro account to get private repositories, or host the data table definitions yourself.
To do that you can upload them to your own server - as you are already doing - or you should also be able to set up a web server like Apache locally on your machine and then get a dynamic hostname from dyndns.com or similar, so that you can point YQL to these definitions. I have not tried this because GitHub worked sufficiently well for me, but I am sure it is possible.
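To give an idea of what gets hosted, a minimal open table definition might look roughly like this (structure from memory; validate it against the table.xsd schema and the existing community tables, and treat the URL and key names as placeholders):

```xml
<table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
  <meta>
    <author>you</author>
    <description>Minimal example table</description>
    <sampleQuery>select * from {table} where q = 'test'</sampleQuery>
  </meta>
  <bindings>
    <select itemPath="" produces="XML">
      <urls>
        <url>http://api.example.com/search?q={q}</url>
      </urls>
      <inputs>
        <key id="q" type="xs:string" paramType="path" required="true"/>
      </inputs>
    </select>
  </bindings>
</table>
```

Once the file is reachable over HTTP, you point YQL at it with a use statement:

```
use "http://yourhost.dyndns.org/mytable.xml" as mytable;
select * from mytable where q = 'test';
```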
Why don't you just put the file you are editing in a public Dropbox folder? This is what I do, and it works pretty well.

Automating WebTrends analysis

Every week I access server logs processed by WebTrends (for about 7 profiles) and copy ad clickthrough and visitor information into Excel spreadsheets. A lot of it is just accessing certain sections and finding the right title and then copying the unique visitor information.
I tried using WebTrends' built-in query tool, but it is really poorly done (it only offers a drag-and-drop system instead of a text-based one, and it caps both the number of parameters and the length of the queries you can run). As far as I can tell, the tools in WebTrends are not suitable for automating the entire web metrics gathering process.
I've gotten access to the raw server logs, but it seems redundant to parse them given that they are already being processed by WebTrends.
To me it seems very scriptable, but how would I go about doing that? Is screen-scraping an option?
I use ODBC for querying metrics and numbers out of WebTrends. We even fill a scorecard with all our key performance metrics.
It's in German, but maybe the idea helps you: http://www.web-scorecard.net/
Michael
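I don't have my exact queries at hand, but the general pattern looks like the following sketch; the DSN, table, and column names are placeholders, since the schema the WebTrends ODBC driver exposes depends on your profiles (explore it with your ODBC admin tool first):

```python
# Sketch: query a WebTrends ODBC DSN and dump the result to CSV for
# Excel. The DSN, table, and column names are placeholders; inspect
# the schema the WebTrends ODBC driver exposes for your profile.
import csv
import pyodbc  # pip install pyodbc

conn = pyodbc.connect("DSN=WebTrendsProfile1")  # hypothetical DSN
cursor = conn.cursor()

# Hypothetical table/columns; list the real ones via cursor.tables().
cursor.execute("SELECT PageTitle, UniqueVisitors FROM AdClickthroughs")

with open("weekly_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])
    writer.writerows(cursor.fetchall())

conn.close()
```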
Which version of WebTrends are you using? Unless this is a very old install, there should be options to schedule these reports to be emailed to you, and also to bookmark queries. Let me know which version it is and I can make some recommendations.