I am facing a problem and I am not sure where it comes from, as I am new to ExtJS.
I am using the TreeGrid in ExtJS 4. I have a combobox where I select an option and run a search; the search then populates the TreeGrid.
The problem shows up when I have a huge XML file to load into the TreeGrid: it takes too much time. Can anyone please help me identify what the problem might be?
With a small XML file it works fine.
I have also run into problems loading large files. If your files are that large, don't stick with XML.
Try the JSON format instead; it performs better with large files.
To read XML you need to parse it, walk the nodes, attributes, and child nodes in the document, and then use the data that you've found.
With JSON it's easy to get at the data, since it's already native JavaScript. No parsers or proxies are necessary; all you need to do is loop through the data. Fast and simple. (There is a small comparison after the link below.)
http://think2loud.com/680-json-xml/
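The difference in effort is easy to see even outside the browser. Here is a small comparison, written in Python purely to illustrate the principle (the same idea applies to Ext's XML reader vs its JSON reader): the XML side has to walk elements and attributes, while the JSON side decodes straight into native lists and dicts.

    import json
    import xml.etree.ElementTree as ET

    # The same two records in both formats (made-up sample payloads).
    xml_doc = '<rows><row id="1" name="foo"/><row id="2" name="bar"/></rows>'
    json_doc = '[{"id": 1, "name": "foo"}, {"id": 2, "name": "bar"}]'

    # XML: parse the document, then walk elements and read attributes one by one.
    rows_from_xml = [
        {"id": int(el.get("id")), "name": el.get("name")}
        for el in ET.fromstring(xml_doc).findall("row")
    ]

    # JSON: one decode call and the data is already plain lists and dicts.
    rows_from_json = json.loads(json_doc)

    print(rows_from_xml == rows_from_json)  # True: same data, much less ceremony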
I have several pieces of data that need to be merged into one file (ATContentTypes blob file, Plone 4.1). The total amount of data is likely to be quite large so I really don't want to have to load it all into memory, concatenate it, and do something like o.setFile(data). If I were writing directly to the file system I could just do open(myfile, 'a') and write to it, but I'm not clear how I could do that with a blob supported content type. All of the docs and tests I've been able to look at just have it being set with a str or in-memory StringIO. Is there a way to append to this field without loading the whole thing into memory?
Similarly, I've also looked at using Dexterity with a plone.namedfile NamedBlobFile. It looks like that field just has a 'data' attribute that is basically a string. How could I append to that without loading the whole thing into memory?
It's quite old and the product has never been officially released, but it can help you: ore.bigfile.
It's well explained in this blog article: http://blog.jazkarta.com/2010/09/21/handling-large-files-in-plone-with-ore-bigfile/
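If ore.bigfile does not fit, it may also help to know that the low-level ZODB blob API can open a blob's backing file in append mode, so data can be streamed in chunk by chunk. Here is a rough sketch under that assumption (getting at the field's underlying ZODB Blob is the tricky part; for plone.namedfile that means touching the _blob attribute, which is an implementation detail, so treat this as an illustration rather than a supported API):

    from ZODB.blob import Blob

    def append_in_chunks(blob, chunk_iter):
        """Append chunks of bytes to a ZODB Blob without ever holding
        the whole file in memory (only one chunk at a time)."""
        # 'a' opens the blob's backing file in append mode; commit the
        # transaction afterwards as usual so the change is persisted.
        with blob.open('a') as f:
            for chunk in chunk_iter:
                f.write(chunk)

    # Illustrative usage with a standalone blob (names here are hypothetical):
    blob = Blob()
    append_in_chunks(blob, (b'part one, ', b'part two'))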
Like many web developers, I build forms all the time, and I find myself doing the same things every time: placing input fields, assigning a name to each, AJAX-ing the form, then writing the PHP, which means assigning a PHP variable to each $_REQUEST['var'], escaping and validating the data, building the HTML, and emailing the results...
So roughly 70% of the work is repetitive, but I can't just duplicate a page and change the fields: I end up wasting more time reformatting, deleting, and adding different fields than I would creating the form from scratch.
I started planning to write a "list of IDs to HTML+PHP" converter, into which I'd input all the IDs and which would output the basic HTML and PHP. Then I thought: there must be thousands of developers who go through this, so I'd be reinventing the wheel. That is my question: I'm trying to find the wheel that somebody must already have invented.
I found this: http://www.trirand.com/blog/jqform/ which does more or less what I'm looking for, but it's an expensive solution and has too much functionality for what I'd be using it for.
Which tools do you use to streamline repetitive HTML and PHP form work?
Creating forms using plain HTML is cumbersome and time-consuming. The task will be much simpler if you use an open-source form library. I use Zend_Forms extensively. You could also look into the one provided by EZ Components.
These form libraries let you specify the various form elements, validators for each element, and data filters (stripping tags, lowercasing, etc.). Once you have specified these, the library automatically handles rendering the form, and if there are errors it will render those as well. These libraries usually render the form with certain predefined markup (HTML), but that is configurable too.
If you start using one of these libraries you will save a lot of time creating forms. In fact, I would suggest using a full framework such as Zend or Symfony for your projects.
I turn to your expert advice because I am somewhat new to Objective-C; I have read a couple of books and docs (namely Aaron Hillegass's and Stephen G. Kochan's books), but some things are still unclear to me for lack of practice.
To give you some context, I have an NSDocument project that uses Core Data for storage.
I'm struggling with two things right now: reading/writing files, and table views ^^
So my first question is about Core Data: is it only able to save in SQLite, XML, or binary format?
Or can I use Core Data to read/write any format, according to what I declare in the plist file?
I am trying to work with .po files, and I want to display the translations in a table view containing two columns (one for the msgid and the other for the msgstr).
To read and write files in the .po format and display lines in my table view, I most likely need to parse the files using line endings and characters such as "#" as delimiters (a rough sketch of that kind of parsing appears after the answer below).
I haven't gotten around to doing that yet (I have no idea how to do it yet!), but I would like to know whether it is possible, or whether I need to restart my project without Core Data...
Please DO NOT just throw links to the Apple documentation at me; it's the most confusing thing ever and feels like it's made for experts only! I need me some human-readable explanations :)
Thanks a bunch for any help and advice you can give me!
It is possible to write a different storage format for Core Data, but it is not easy and it sounds like you are not at a level where that is a possibility (no shame there, I'm not either).
If you are only displaying data from the .po files, there is no need to use Core Data. Core Data is meant to provide a storage solution: you create/edit data and save it with Core Data. If you have no intention of creating and editing data, get rid of Core Data; it will only get in the way.
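As for the parsing step mentioned in the question: a .po file is line-oriented, so for a simple viewer it is enough to scan for msgid/msgstr pairs and skip the "#" comment lines. Here is a rough sketch of that loop, written in Python only to show the shape of the logic (the same structure maps onto NSString's line and prefix APIs); it ignores multi-line strings and plural forms:

    def parse_po_pairs(text):
        """Tiny .po reader returning (msgid, msgstr) pairs.
        Ignores '#' comments, multi-line strings and plural forms."""
        pairs, msgid = [], None
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith('#'):
                continue  # skip comments and blank lines
            if line.startswith('msgid '):
                msgid = line[len('msgid '):].strip('"')
            elif line.startswith('msgstr ') and msgid is not None:
                pairs.append((msgid, line[len('msgstr '):].strip('"')))
                msgid = None
        return pairs

    sample = '# a comment\nmsgid "Hello"\nmsgstr "Bonjour"\n'
    print(parse_po_pairs(sample))  # [('Hello', 'Bonjour')]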
I'm working on a project for the iPad where I need to read and write an XML file, which is also used by the Windows counterpart of the application.
The problem is that I've been looking around but haven't found a way to modify an element or attribute in the XML without having to rebuild the whole document.
I saw another post describing basically the same problem, and I ended up at the same point as the person asking that question: NSXMLParser and TouchXML are read-only and do not let me modify my XML.
Any other suggestions about what I can use?
Thanks!
I want to be able to generate a highly graphical PDF file (with lots of text content as well) from data that I might have in a database, XML, or any other structured form.
Currently our graphic designer creates these PDF files manually in Photoshop after getting the content as an MS Word document. But there are usually more than 20 revisions of the content: small changes here and there, spelling corrections, etc.
The 2 disadvantages are:
1) The graphic designer's time is unnecessarily occupied. The first version is the only one he/she should have to work on.
2) The PDF file becomes the document that holds the final revised content, and the initial content is out of sync with it. So if the initial content needs to go somewhere else (like on a website), we need to recreate it from the PDF file.
Generating the PDF file automatically would solve both of these problems. Perhaps there is some way in which the graphic designer creates a "template", puts in tags/placeholders, and maps those tags/placeholders to the relevant data.
Thanks :-)
There are some tools out there for doing this. XSL-FO is useful. Here is a tutorial for creating a PDF from XML (or XHTML) with Cocoon. Also see Apache FOP.
You could format your SQL data as XML and still use the same templates this way.
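For what it's worth, once the data is available as XML and you have an XSLT stylesheet that turns it into XSL-FO, the standard Apache FOP command line does the rendering. A small sketch of driving it from a script (the file names are made up, and it assumes the fop launcher is on your PATH):

    import subprocess

    # Hypothetical inputs: data exported from the database as XML, plus an
    # XSLT stylesheet that transforms that XML into XSL-FO markup.
    data_xml = "report-data.xml"
    stylesheet = "report-template.xsl"
    output_pdf = "report.pdf"

    # Standard FOP invocation: apply the XSLT to the XML, then render to PDF.
    subprocess.check_call(
        ["fop", "-xml", data_xml, "-xsl", stylesheet, "-pdf", output_pdf]
    )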
I use the ReportLab Python library for this. It could perhaps solve your problem, but you will need to do some work...
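To give an idea of the kind of work involved, here is a minimal ReportLab sketch that pours a couple of values (which could come from a database or parsed XML) into a flowing document. The field names and content are made up, and a layout matching a designer's template would take considerably more code:

    from reportlab.lib.pagesizes import A4
    from reportlab.lib.styles import getSampleStyleSheet
    from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

    # Hypothetical data pulled from a database or parsed out of XML.
    data = {"title": "Quarterly Brochure", "body": "Lots of text content..."}

    styles = getSampleStyleSheet()
    doc = SimpleDocTemplate("brochure.pdf", pagesize=A4)

    # Each flowable is a piece of content; ReportLab handles pagination.
    story = [
        Paragraph(data["title"], styles["Title"]),
        Spacer(1, 12),
        Paragraph(data["body"], styles["BodyText"]),
    ]
    doc.build(story)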
In the past I have written scripts that spit out LaTeX and then used texi2pdf to solve this kind of problem.
Take a look at iReport and JasperReports at http://jasperforge.org.
iReport lets you design reports; you can then either fill them programmatically with the JasperReports library (Java) or just use iReport to create the report manually.
I have only used it for tabular data, but I don't think there would be any problem for other types of documents.
You could create a form and populate the entries programmatically using a PDF library like iText (Java).
You could look at doing the workflow in PostScript, which is plain text that you can easily compose from fragments; then you can use any free tool to convert it to PDF.
Take a look at Prince XML. This tool lets you generate PDFs from XML or HTML plus CSS.
A possible way is to use a template engine, like FreeMarker or StringTemplate: these are often used to generate HTML, but they are flexible enough to output any format, actually.
The problem is to make a PDF template, I suppose. Perhaps you can take a sample output and edit it to replace data with placeholders to be filled by the template engine. Might not be trivial!
Sounds like a job that SQL Server Reporting Services can handle quite easily.
Reporting Services allows you to query the data, define the layout, and export to PDF without any intervention. The PDF output can be distributed via email, stored on a file share, and accessed via a page on the report server.
It can handle XML data sources too.
Another approach to generating a PDF file from data is to use prawn, a Ruby library. I was very pleasantly surprised by how much functionality prawn includes. It may take some up-front investment, but this approach will give you a lot of flexibility.
You can combine CSStoXSLFO with XEP from RenderX for high-quality output. With this solution you can merge XML data into an XHTML template decorated with CSS. It can also generate charts with the excellent JFreeChart library. CSS3 paged media features are supported.