I am trying to write some content from one CSV file to another CSV file using BeanIO. I am able to get the contents, but the header is not written to the destination file. I don't know how to fix this; could someone please help me? Following is the code:
StreamFactory factory = StreamFactory.newInstance();
factory.load("config" + File.separatorChar
        + CSVMain.prop.getProperty("ordersmapping"));
orderWriter = factory.createWriter("salesOrder", new File(property));
for (int i = 0; i < orders.size(); i++) {
    orderWriter.write(orders.get(i));
}
orderWriter.flush();
orderWriter.close();
The code is written inside a method. I also want to remove the carriage return (\r) from the output.
Thanks in advance.
I got the answer from a Google Groups thread, which uses a class for the header and then sets its fields to ignore, basically overriding them. I did not want to create a dedicated class, so instead I re-used the map class as follows:
<stream name="XYZ" format="csv">
    <parser>
        <property name="alwaysQuote" value="true" />
    </parser>
    <record name="header" class="map" order="1" minOccurs="1" maxOccurs="1">
        <field name="Name" default="Name" ignore="true"/>
        <field name="Surname" default="Surname" ignore="true"/>
    </record>
    <record name="record" class="map" order="2">
        <field name="Name"/>
        <field name="Surname"/>
    </record>
</stream>
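On the carriage-return part of the question: when writing, BeanIO terminates each record with the platform line separator (\r\n on Windows) by default, and the parser's recordTerminator property should let you override that. A minimal sketch against the mapping above (if the \n escape is not honored by your BeanIO version, the XML character reference &#10; can be used instead):
<parser>
    <property name="alwaysQuote" value="true" />
    <!-- write a bare \n after each record instead of the platform default -->
    <property name="recordTerminator" value="\n" />
</parser>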
You may use this util method to easily create a header without any additional class or XML configuration.
public static void main(String[] args) {
    final String factoryName = "comma delimited csv factory";
    final String headerName = "CarHeader";
    final var builder = new StreamBuilder(factoryName)
            .format("csv")
            .addRecord(Headers.of(Car.class, headerName))
            .addRecord(Car.class);
    final var factory = StreamFactory.newInstance();
    factory.define(builder);
    final ByteArrayOutputStream bout = new ByteArrayOutputStream();
    final BeanWriter writer = factory.createWriter(factoryName, new OutputStreamWriter(bout));
    try {
        writer.write(headerName, null);
        writer.write(new Car("Ford Ka", 2016));
        writer.write(new Car("Ford Fusion", 2020));
    } finally {
        writer.close();
    }
    System.out.println(bout.toString());
    // Model,Year
    // Ford Ka,2016
    // Ford Fusion,2020
}
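For reference, here is a minimal sketch of how such a Headers helper could be written with BeanIO's builder API; this is an assumption on my part, not necessarily the original implementation. It derives the header columns from the bean's declared fields (capitalizing them to match the Model,Year output above), and Car is assumed to be a BeanIO-annotated bean with model and year properties:
import java.lang.reflect.Field;
import org.beanio.builder.FieldBuilder;
import org.beanio.builder.RecordBuilder;

public final class Headers {

    private Headers() { }

    // Builds an unbound header record whose columns default to the bean's
    // field names, so writer.write(recordName, null) emits a header line.
    public static RecordBuilder of(Class<?> beanClass, String recordName) {
        RecordBuilder header = new RecordBuilder(recordName);
        for (Field f : beanClass.getDeclaredFields()) {
            String column = capitalize(f.getName()); // "model" -> "Model"
            header.addField(new FieldBuilder(column).defaultValue(column));
        }
        return header;
    }

    private static String capitalize(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }
}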
I have code that looks like this:
public void testSomething1() {
}
public void testSomething2() {
}
public void testSomething3() {
}
And I want the result to look like this with find and replace:
public void testSomething2() {
}
public void testSomething3() {
}
public void testSomething4() {
}
I need this because I just realized I need to add another test between 2 and 3 and leave the rest intact. I wish I could add it at the end, but I can't: I have to use this test generation tool and implement the tests for a school project. I have 51 tests and I don't want to shift n + 1 manually for every test, especially since there's a comment referring to the number. Oh please tell me there's a way T_T. Thank you!
You could use Edit | Find | Replace Structurally for this. Use a template like the following:
<replaceConfiguration name="Counting" text="void $method$();" recursive="false" type="JAVA" pattern_context="member" reformatAccordingToStyle="false" shortenFQN="false" replacement="void $replacement$();">
<constraint name="__context__" within="" contains="" />
<constraint name="method" regexp="(testSomething)(\d+)" within="" contains="" />
<variableDefinition name="replacement" script=""def (_,name, number) = (method.name =~ /(\p{L}+)(\d+)/)[0]
name + (Integer.parseInt(number) + 1)
"" />
</replaceConfiguration>
You can copy this pattern and use the Import Template from Clipboard action in the menu under the tool button of the Structural Search dialog.
I'm trying to use the jPOS library to pack/unpack ISO 8583-1987 messages.
I have a problem with the format, and I can't find any running example on the internet.
Could someone give me a running example of unpacking a hexadecimal message? There are a lot of examples with ASCII messages, but that is not what I need.
Thank you all for your time & attention
Julien
I'm assuming you have the hex string representing the message in a String; in that case you have to convert it into a byte array.
For example, assume you have the string as an argument to your main. In any case, you have to know the format of the ISO message contained in that hex representation: if the message is binary you have to choose ISO87BPackager, if it is ASCII you have to choose ISO87APackager.
import org.jpos.iso.packager.ISO87BPackager;
import org.jpos.iso.ISOException;
import org.jpos.iso.ISOMsg;
import org.jpos.iso.ISOUtil;

public class ParseISOMsg {
    public static void main(String[] args) throws ISOException {
        String hexmsg = args[0];
        // convert hex string to byte array
        byte[] bmsg = ISOUtil.hex2byte(hexmsg);
        ISOMsg m = new ISOMsg();
        // set packager; change ISO87BPackager for the matching one
        m.setPackager(new ISO87BPackager());
        // unpack the message using the packager
        m.unpack(bmsg);
        // dump the message to standard output
        m.dump(System.out, "");
    }
}
For example if you call java -cp .:jpos.jar ParseISOMsg 080000200000008000001234563132333435363738 it should print:
<isomsg>
<!-- org.jpos.iso.packager.ISO87BPackager -->
<field id="0" value="0800"/>
<field id="11" value="123456"/>
<field id="41" value="12345678"/>
</isomsg>
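Packing works the same way in reverse: populate an ISOMsg, set the packager, and pack() gives you the raw bytes, which ISOUtil can render back as a hex string. A short sketch using the same packager (adjust the fields to your message):
ISOMsg m = new ISOMsg();
m.setPackager(new ISO87BPackager());
m.setMTI("0800");
m.set(11, "123456");   // STAN
m.set(41, "12345678"); // terminal id
// pack to bytes, then print the hex representation
byte[] packed = m.pack();
System.out.println(ISOUtil.hexString(packed));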
I declared a variable below:
import os
......
class product(osv.osv):
......
file_import = fields.Binary(string="File")
@api.multi
def save_file(self):
# do something
If I declare the variable above, can I get the extension of file_import?
Create a new field to store the file name and set it in the XML.
Example
----Python-----
import os
......
class product(osv.osv):
......
file_import = fields.Binary(string="File")
filename = fields.Char('Filename')
------XML-----
<field name="filename" invisible="1"/>
<field name="file_import" filename="filename"/>
So when you upload a file into the file_import field, the file name is automatically stored in the filename field. From filename you can get its extension.
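To actually read the extension once filename is populated, os.path.splitext does the job. A small sketch inside the save_file method from the question (assuming the two fields above):
import os

@api.multi
def save_file(self):
    for rec in self:
        if rec.filename:
            # os.path.splitext('report.csv') -> ('report', '.csv')
            name, extension = os.path.splitext(rec.filename)
            # do something with the extension here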
Hope this helps.
I'm using the Solr 4.6 example's SimplePostTool to import documents from the filesystem into Solr. Everything is OK, except that the last_modified field is filled only when the original document has metadata for it. If the metadata is not present, the Solr extractor leaves the field blank.
I tried to modify SimplePostTool to set this field using the file system modification date, but then I get this error when I try to import files that already have a last_modified value from the metadata:
430584 [qtp1214238505-16] ERROR org.apache.solr.core.SolrCore –
org.apache.solr.common.SolrException: ERROR:
[doc=4861976] multiple values encountered for non multiValued field
last_modified: [2013-12-22T14:03:10.000Z, 2013-07-02T11:29:20.000Z]
I'm thinking about using a custom field for the file system date, but in my case the metadata date is preferable when it is available. Is there any way to merge them at import time?
Thanks!
You can set a default value in your schema. Something like this should work:
<field name="my_date" type="date" indexed="true" stored="true" multiValued="false" default="NOW" />
Field Type Definition:
<fieldType name="date" class="solr.TrieDateField" sortMissingLast="true" omitNorms="true"/>
While creating a document, Solr takes all input as text and then validates it according to the given data type, so any valid date format would work fine with Solr. Use NOW for the current time, or any other default value.
regards
Rajat
I finally solved the issue creating a custom Update Request Processor, as explained here: http://wiki.apache.org/solr/UpdateRequestProcessor
My processor is as follows:
package com.mycompany.solr;
import java.io.IOException;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;
public class LastModifiedMergeProcessorFactory
        extends UpdateRequestProcessorFactory {

    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
            SolrQueryResponse rsp, UpdateRequestProcessor next) {
        return new LastModifiedMergeProcessor(next);
    }
}

class LastModifiedMergeProcessor extends UpdateRequestProcessor {

    public LastModifiedMergeProcessor(UpdateRequestProcessor next) {
        super(next);
    }

    @Override
    public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        Object metaDate = doc.getFieldValue("last_modified");
        Object fileDate = doc.getFieldValue("file_date");
        if (metaDate == null && fileDate != null) {
            doc.addField("last_modified", fileDate);
        }
        // pass it up the chain
        super.processAdd(cmd);
    }
}
Where file_date is a field I set with the file modification date at import time.
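For the processor to run, it also has to be registered as an update request processor chain in solrconfig.xml and referenced by the handler that does the extraction. Roughly like this (the chain name is illustrative; merge the defaults with whatever your /update/extract handler already declares):
<updateRequestProcessorChain name="last-modified-merge">
  <processor class="com.mycompany.solr.LastModifiedMergeProcessorFactory" />
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>

<requestHandler name="/update/extract" class="solr.extraction.ExtractingRequestHandler" startup="lazy">
  <lst name="defaults">
    <str name="update.chain">last-modified-merge</str>
  </lst>
</requestHandler>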
How can I access/pass a value object in a custom item renderer? The item renderer represents a field in my datagrid and I want to be able to access properties from my VO.
Thanks
You will want to override the set data method of the item renderer:
<?xml version="1.0" encoding="utf-8"?>
<mx:Canvas xmlns:fx="http://ns.adobe.com/mxml/2009"
           xmlns:s="library://ns.adobe.com/flex/spark"
           xmlns:mx="library://ns.adobe.com/flex/halo">
    <fx:Script>
        <![CDATA[
            // Strongly typed VO for use in binding.
            [Bindable]
            private var myValueObject:MyValueObject;

            override public function set data(value:Object):void
            {
                // we don't want to update if the value is the exact same
                if (data === value)
                    return;

                // you could simply access the data property, but I think
                // it is nicer to have strong typing for code hints
                super.data = myValueObject = value;
                validateNow();
            }
        ]]>
    </fx:Script>
    <mx:Label text="{myValueObject.name}"/>
</mx:Canvas>
What about this approach?
Item renderer ItemRendererLink.mxml:
<mx:LinkButton xmlns:mx="http://www.adobe.com/2006/mxml" label="{data.item_num}" click="goTo()">
    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;
            // set by the parent component, e.g. goToScreen="item"
            public var goToScreen:String;
            public function goTo():void {
                Alert.show(goToScreen + " " + data.item_num);
            }
        ]]>
    </mx:Script>
</mx:LinkButton>
parent component:
<mx:DataGridColumn dataField="item_num"
headerText="Item #">
<mx:itemRenderer>
<mx:Component>
<e:ItemRendererLink goToScreen="item"/>
</mx:Component>
</mx:itemRenderer>
</mx:DataGridColumn>