I declared a field as below:

import os
......
class product(osv.osv):
    ......
    file_import = fields.Binary(string="File")

    @api.multi
    def save_file(self):
        # do something

If I declare the field as above, can I get the extension of the file uploaded into file_import?
Create a new field to store the file name and set it in the XML.
Example:

----Python-----

import os
......
class product(osv.osv):
    ......
    file_import = fields.Binary(string="File")
    filename = fields.Char('Filename')

------XML-----

<field name="filename" invisible="1"/>
<field name="file_import" filename="filename"/>

So when you upload a file into the file_import field, the file's name is automatically stored in the filename field. From filename you can get its extension.
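For example, a minimal sketch of reading the extension inside your save_file method (assuming the filename field above gets filled by the view; os.path.splitext does the split):

import os

@api.multi
def save_file(self):
    for rec in self:
        if rec.filename:
            # the filename="filename" attribute in the view makes the client
            # store the uploaded file's name in this field
            name, ext = os.path.splitext(rec.filename)
            # ext is e.g. '.csv'; validate or branch on it as needed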
Hope this helps.
I have created a new field in sale.order.line:

_columns = {
    'od_deivered_quantity': fields.float('Delivered Quantity',
                                         trackvisibility=onchange, readonly=False)
}
then I wrote the following onchange function:
@api.depends('product_uom_qty')
def onchange_delivered_order(self, cr, uid, ids, context=None):
    res = {}
    delivered_qty = self.product_uom_qty
    return {'value': {'od_delivered_qty': delivered_qty}}
XML code as follows:
<xpath expr="//field[@name='order_line']/tree//field[@name='product_uom_qty']" position="after">
    <field name="od_delivered_qty"/>
</xpath>
but it does not work.
Sorry, but your code is a bit messy and buggy; furthermore, it looks like you're mixing the old API style (pre-v8) in the field declaration with the new one in the onchange method.
Let's recap to make sure I understood your requirements: you need a new field od_deivered_quantity that is updated whenever product_uom_qty changes. Is that right?
I suggest something like the following chunks (I'm gonna use the new API style):
od_deivered_quantity = fields.Float(
    "Delivered Quantity", track_visibility="onchange", readonly=False
)

@api.onchange('product_uom_qty')
def onchange_delivered_order(self):
    # if isinstance(self.product_uom_qty, bool):
    #     return
    delivered_qty = self.product_uom_qty
    self.od_deivered_quantity = delivered_qty
Please inspect this onchange method's behaviour once it is up and running in your module: I put a commented-out boolean check in it, in case (for whatever awkward reason) the method gets called with a False value in product_uom_qty.
Talking about views, your XML may be OK, because in the new API style (https://www.odoo.com/documentation/8.0/reference/orm.html#onchange-updating-ui-on-the-fly):
"both computed fields and new-API onchanges are automatically called by the client without having to add them in views"
Let me know whether it will work fine or you'll face any problems.
Does anybody know if there is a simple way to make the filename path of a file sink variable according to one field in the model?
So, instead of using a fixed path like:
fixed_path/filename.csv
I would like to use something like:
{variable_path}/filename.csv
The attached ModelInitializer shows how to do this. I also copied the code below in case the attachment doesn't go through. To enable the ModelInitializer, you need to add the following to the scenario.xml file:
<model.initializer class="PredatorPrey.MyInitializer" />
I tested this in the Predator Prey demo, so you should change the class package name. In the ModelInitializer example, you need to specify the root context ID, which is the same as the context ID in the context.xml file, and the variable output folder name. This example requires a file name to be specified in the file sink as you would normally do, and it inserts the variable path. One caveat is that the variable folder path will be saved in the scenario if the scenario is saved in the GUI; however, this code checks for any existing path and simply replaces it with the outputFolder string. So you should put the entire path in the outputFolder string, not just part of it, or change the code behavior as needed.
package PredatorPrey;

import java.io.File;

import repast.simphony.data2.engine.FileSinkComponentControllerAction;
import repast.simphony.data2.engine.FileSinkDescriptor;
import repast.simphony.engine.controller.NullAbstractControllerAction;
import repast.simphony.engine.environment.ControllerAction;
import repast.simphony.engine.environment.RunEnvironmentBuilder;
import repast.simphony.engine.environment.RunState;
import repast.simphony.scenario.ModelInitializer;
import repast.simphony.scenario.Scenario;
import repast.simphony.util.collections.Tree;

public class MyInitializer implements ModelInitializer {

  @Override
  public void initialize(final Scenario scen, RunEnvironmentBuilder builder) {
    scen.addMasterControllerAction(new NullAbstractControllerAction() {
      String rootContextID = "Predator Prey";
      String outputFolder = "testoutfolder";

      @Override
      public void batchInitialize(RunState runState, Object contextId) {
        Tree<ControllerAction> scenarioTree =
            scen.getControllerRegistry().getActionTree(rootContextID);
        findFileSinkTreeChildren(scenarioTree, scenarioTree.getRoot(), outputFolder);

        // Reset the scenario dirty flag so the changes made to the file sink
        // descriptors don't prompt a scenario save in the GUI
        scen.setDirty(false);
      }
    });
  }

  public static void findFileSinkTreeChildren(Tree<ControllerAction> tree,
      ControllerAction parent, String outputFolder) {
    // Check each ControllerAction in the scenario and if it is a FileSink,
    // modify the output path to include the folder
    for (ControllerAction act : tree.getChildren(parent)) {
      if (act instanceof FileSinkComponentControllerAction) {
        FileSinkDescriptor descriptor = ((FileSinkComponentControllerAction) act).getDescriptor();
        String fileName = descriptor.getFileName();

        // remove any prefix directories from the file name
        int lastSeparatorIndex = fileName.lastIndexOf(File.separator);

        // Check for backslash separator
        if (fileName.lastIndexOf('\\') > lastSeparatorIndex)
          lastSeparatorIndex = fileName.lastIndexOf('\\');

        // Check for forward slash separator
        if (fileName.lastIndexOf('/') > lastSeparatorIndex)
          lastSeparatorIndex = fileName.lastIndexOf('/');

        if (lastSeparatorIndex > 0) {
          fileName = fileName.substring(lastSeparatorIndex + 1, fileName.length());
        }

        descriptor.setFileName(outputFolder + File.separator + fileName);
      }
      else findFileSinkTreeChildren(tree, act, outputFolder);
    }
  }
}
I'm trying to use the jPOS library to pack/unpack ISO 8583 (1987) messages.
I have a problem with the format, and I can't find any running example on the internet.
Could someone give me a running example of unpacking a hexadecimal message? There are a lot of examples with ASCII messages, but that is not what I need.
Thank you all for your time & attention
Julien
I'm assuming you have the hex string representing the message in a String; in that case you have to convert it into a byte array.
For example, assume you have the string as an argument to your main. Either way, you have to know the format of the ISO message contained in that hex representation: if the message is binary you have to choose ISO87BPackager, and if it is ASCII you have to choose ISO87APackager.
import org.jpos.iso.packager.ISO87BPackager;
import org.jpos.iso.ISOException;
import org.jpos.iso.ISOMsg;
import org.jpos.iso.ISOUtil;

public class ParseISOMsg {
    public static void main(String[] args) throws ISOException {
        String hexmsg = args[0];
        // convert hex string to byte array
        byte[] bmsg = ISOUtil.hex2byte(hexmsg);
        ISOMsg m = new ISOMsg();
        // set packager; change ISO87BPackager for the matching one
        m.setPackager(new ISO87BPackager());
        // unpack the message using the packager
        m.unpack(bmsg);
        // dump the message to standard output
        m.dump(System.out, "");
    }
}
For example, if you call java -cp .:jpos.jar ParseISOMsg 080000200000008000001234563132333435363738, it should print:
<isomsg>
  <!-- org.jpos.iso.packager.ISO87BPackager -->
  <field id="0" value="0800"/>
  <field id="11" value="123456"/>
  <field id="41" value="12345678"/>
</isomsg>
I am trying to write some contents from one CSV file to another CSV file using BeanIO. I am able to get the contents, but the header is not written to the destination file, and I don't know how to fix this. Please, can someone help me? Following is the code:
StreamFactory factory = StreamFactory.newInstance();
factory.load("config" + File.separatorChar
        + CSVMain.prop.getProperty("ordersmapping"));
orderWriter = factory.createWriter("salesOrder", new File(property));
for (int i = 0; i < orders.size(); i++) {
    orderWriter.write(orders.get(i));
}
orderWriter.flush();
orderWriter.close();
The code is written inside a method. I also want to remove the carriage return (\r) from the output.
Thanks in advance.
I got the answer from a Google Groups thread, which uses a dedicated class for the header and then sets its fields to ignore, basically overriding them. I did not want to create a dedicated class, so instead I re-used the map class as follows:
<stream name="XYZ" format="csv">
    <parser>
        <property name="alwaysQuote" value="true" />
    </parser>
    <record name="header" class="map" order="1" minOccurs="1" maxOccurs="1">
        <field name="Name" default="Name" ignore="true"/>
        <field name="Surname" default="Surname" ignore="true"/>
    </record>
    <record name="record" class="map" order="2">
        <field name="Name"/>
        <field name="Surname"/>
    </record>
</stream>
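With this mapping, a rough sketch of emitting the header before the data rows (writer setup and file name are assumptions; imports from java.util and org.beanio are assumed; the ignored header fields fall back to their defaults, so writing an empty map produces the header line):

// assumes the mapping above was loaded into a StreamFactory
BeanWriter writer = factory.createWriter("XYZ", new File("out.csv"));

// ignored fields are unbound, so their default values ("Name", "Surname")
// are written, i.e. this call emits the header row
writer.write("header", new HashMap<String, Object>());

// data records are plain maps keyed by field name
Map<String, Object> row = new HashMap<>();
row.put("Name", "John");
row.put("Surname", "Doe");
writer.write("record", row);

writer.flush();
writer.close();

As for the stray \r, BeanIO's parser element accepts, if I remember the docs right, a recordTerminator property (e.g. <property name="recordTerminator" value="&#10;"/>) that controls the line ending used on write.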
You may use this utility method to easily create a header without a dedicated header class or any XML configuration:
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;

import org.beanio.BeanWriter;
import org.beanio.StreamFactory;
import org.beanio.builder.StreamBuilder;

public class HeaderExample {

    public static void main(String[] args) {
        final String factoryName = "comma delimited csv factory";
        final String headerName = "CarHeader";
        final var builder = new StreamBuilder(factoryName)
                .format("csv")
                .addRecord(Headers.of(Car.class, headerName))
                .addRecord(Car.class);

        final var factory = StreamFactory.newInstance();
        factory.define(builder);

        final ByteArrayOutputStream bout = new ByteArrayOutputStream();
        final BeanWriter writer = factory.createWriter(factoryName, new OutputStreamWriter(bout));
        try {
            writer.write(headerName, null);
            writer.write(new Car("Ford Ka", 2016));
            writer.write(new Car("Ford Fusion", 2020));
        } finally {
            writer.close();
        }

        System.out.println(bout.toString());
        // Model,Year
        // Ford Ka,2016
        // Ford Fusion,2020
    }
}
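Note that Headers.of is not part of BeanIO itself; a possible sketch of it, and of the Car bean, using BeanIO's builder API and annotations (all names here are illustrative, and the exact FieldBuilder methods should be double-checked against the BeanIO version in use):

import org.beanio.annotation.Field;
import org.beanio.annotation.Record;
import org.beanio.builder.FieldBuilder;
import org.beanio.builder.RecordBuilder;

// illustrative annotated bean used by the example above
@Record
class Car {
    @Field(at = 0, name = "Model")
    private String model;
    @Field(at = 1, name = "Year")
    private int year;

    public Car() { }

    public Car(String model, int year) {
        this.model = model;
        this.year = year;
    }
}

final class Headers {

    // builds a header record whose fields are unbound (ignored) and
    // default to the names taken from the bean's @Field annotations
    static RecordBuilder of(Class<?> beanClass, String recordName) {
        RecordBuilder record = new RecordBuilder(recordName);
        for (java.lang.reflect.Field f : beanClass.getDeclaredFields()) {
            Field annotation = f.getAnnotation(Field.class);
            if (annotation == null) {
                continue;
            }
            String name = annotation.name().isEmpty() ? f.getName() : annotation.name();
            record.addField(new FieldBuilder(name)
                    .defaultValue(name)   // value written for the null header bean
                    .ignore());           // not bound to any bean property
        }
        return record;
    }
}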
I'm using the Solr 4.6 example's SimplePostTool to import documents from the filesystem into Solr. Everything is OK, but the last_modified field is filled only when the original document has metadata for it. If the metadata is not present, the Solr extractor leaves the field blank.
I tried to modify SimplePostTool to set this field using the file system modification date, but then I get this error when I try to import files that already have a last_modified value from the metadata:
430584 [qtp1214238505-16] ERROR org.apache.solr.core.SolrCore –
org.apache.solr.common.SolrException: ERROR:
[doc=4861976] multiple values encountered for non multiValued field
last_modified: [2013-12-22T14:03:10.000Z, 2013-07-02T11:29:20.000Z]
I'm thinking about using a custom field for the file system date, but in my case the metadata date is preferable when it is available. Is there any way to merge them at import time?
Thanks!
You can set a default value in your schema. Something like this should work:
<field name="my_date" type="date" indexed="true" stored="true" multiValued="false" default="NOW" />
Field Type Definition:
<fieldType name="date" class="solr.TrieDateField" sortMissingLast="true" omitNorms="true"/>
When creating a document, Solr takes all input as text and then validates it according to the given data type; hence any valid date format is accepted and works fine with Solr, whether the default is the current time or any other fixed value.
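For instance (field name as in the answer above; the fixed date is just an illustrative ISO 8601 literal):

<!-- current time at indexing -->
<field name="my_date" type="date" indexed="true" stored="true" default="NOW" />
<!-- or any fixed date literal -->
<field name="my_date" type="date" indexed="true" stored="true" default="1995-12-31T23:59:59Z" />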
Regards,
Rajat
I finally solved the issue by creating a custom Update Request Processor, as explained here: http://wiki.apache.org/solr/UpdateRequestProcessor
My processor is as follows:
package com.mycompany.solr;

import java.io.IOException;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class LastModifiedMergeProcessorFactory
        extends UpdateRequestProcessorFactory {

    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
            SolrQueryResponse rsp, UpdateRequestProcessor next) {
        return new LastModifiedMergeProcessor(next);
    }
}

class LastModifiedMergeProcessor extends UpdateRequestProcessor {

    public LastModifiedMergeProcessor(UpdateRequestProcessor next) {
        super(next);
    }

    @Override
    public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        Object metaDate = doc.getFieldValue("last_modified");
        Object fileDate = doc.getFieldValue("file_date");
        if (metaDate == null && fileDate != null) {
            doc.addField("last_modified", fileDate);
        }
        // pass it up the chain
        super.processAdd(cmd);
    }
}
Where file_date is a field I set with the file modification date at import time.
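For completeness, a processor factory like this has to be registered in solrconfig.xml and attached to the update handler via an update chain; a sketch following the wiki page above (the chain name is illustrative, and the extract handler is the one used for rich-document extraction):

<updateRequestProcessorChain name="lastModifiedMerge">
  <processor class="com.mycompany.solr.LastModifiedMergeProcessorFactory" />
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>

<requestHandler name="/update/extract" startup="lazy"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="update.chain">lastModifiedMerge</str>
  </lst>
</requestHandler>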