In Hybris, how can we export products from the database and store the product attribute values in a CSV file using Beanshell?

I want to export products with their attribute values (price, description) and store them in a CSV file using a Beanshell script.

You can use the export feature in the HAC with a specific ImpEx header.
I don't know your exact requirement, but doing this in plain Beanshell isn't ideal, since Hybris already provides an import/export framework.
You might have a look at DefaultExportService; it can be used from a script.

You could restrict your search results to return only a certain catalog/version (use in the HAC or hMC):
$catalog=YourCatalogId
$version=YourCatalogVersion
"#% import de.hybris.platform.jalo.product.Product;"
"#% impex.setTargetFile( ""Products_and_price.csv"", true, 1, -1 );"
INSERT_UPDATE Product;code[unique=true];description[lang=en];description[lang=de];europe1Prices(price,currency(isoCode))
"#% impex.exportItems("" SELECT {p:pk} FROM {Product as p JOIN CatalogVersion as cv ON {cv:PK}={p:catalogVersion} JOIN catalog as c ON {c:pk}={cv:catalog}} WHERE {c:id}='$catalog' AND {cv:version}='$version'"", Collections.EMPTY_MAP, Collections.singletonList( Product.class ), true, true, -1, -1 );"
Add more languages for the description if needed. Products are linked to their store through their catalog. You could search for this relation (catalog-store) on a new line; I'm not sure how to display it in one line.

Related

How to filter Socrata API dataset by multiple values for a single field?

I am attempting to create a CSV file using Python by reading from this specific api:
https://dev.socrata.com/foundry/data.cdc.gov/5jp2-pgaw
Where I'm running into trouble is that I would like to specify multiple values of "loc_admin_zip" to search for at once. For example, returning a CSV file where the zip is either "10001" or "10002". However, I can't figure out how to do this, I can only get it to work if "loc_admin_zip" is set to a single value. Any help would be appreciated. My code so far:
import pandas as pd
from sodapy import Socrata
client = Socrata("data.cdc.gov", None)
results = client.get("5jp2-pgaw", loc_admin_zip="10002")
results_df = pd.DataFrame.from_records(results)
results_df.to_csv('test.csv')
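One option (a sketch, not tested against the live dataset): Socrata's SoQL supports an `in(...)` filter through the `$where` parameter, which sodapy exposes as the `where` keyword argument. A small helper that builds the clause for several ZIP codes could look like this (dataset id and field name are taken from the question):

```python
def zip_where_clause(zips):
    # Build a SoQL $where clause matching any of the given ZIP codes,
    # e.g. "loc_admin_zip in('10001','10002')"
    quoted = ",".join("'{}'".format(z) for z in zips)
    return "loc_admin_zip in({})".format(quoted)

# Usage (requires network access and the sodapy package):
# from sodapy import Socrata
# import pandas as pd
# client = Socrata("data.cdc.gov", None)
# results = client.get("5jp2-pgaw", where=zip_where_clause(["10001", "10002"]))
# pd.DataFrame.from_records(results).to_csv("test.csv")
```

ZIP codes are passed as strings here, since the underlying field is text.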

Exporting MDX query result to CSV file using ADOM library

In my code, I am using the ADOM library to export MDX query results to a CSV/XLSX file. It exports the data successfully, but an additional tag, All-M or All-L, is appended to the row and column members, while no such tag appears when the same data is exported from SSAS to CSV.
I tried removing the All level checkbox, the (All) level name, and the (All) member name.
Export from icCube, containing the additional tags All-L and All-M:
[Stress Scenarios].[Stress Scenario].[All-L].[MEMBER_CAPTION]
[Snapshots].[Reference Date].[All-M].&[2018-01-02].[Measures].[Market Value Sensitivity]
Export from SSAS:
[Stress Scenarios].[Stress Scenario].[MEMBER_CAPTION]
[Snapshots].[Reference Date].&[2018-01-02T00:00:00].[Measures].[Market Value]
I need the export output from icCube to look like this:
Export from icCube:
[Stress Scenarios].[Stress Scenario].[MEMBER_CAPTION]
[Snapshots].[Reference Date].&[2018-01-02T00:00:00].[Measures].[Market Value]

Django and raw SQL

What is the best way to load SQL into Python code?
I use Django, but sometimes I write raw SQL, and sometimes this code gets big. My question is: how can I import example.sql into my code.py?
example.sql:
SELECT id FROM users_user
code.py:
raw_sql = <this string>
user_ids = RawSqlManager(raw_sql).execute()
If I name my example.sql as example.py it is easy, but I want to keep the Python code and the SQL separate. Is there a way to do it, or is it better to rename example.sql to example.py?
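A minimal sketch of one way to keep the SQL in its own file and read it at call time (the directory layout and helper name are assumptions, not an established Django convention):

```python
from pathlib import Path

def load_sql(name, base_dir="sql"):
    # Read a .sql file so the query text stays out of the Python source;
    # base_dir is a directory of .sql files next to your code
    return Path(base_dir, name).read_text()

# Hypothetical usage with Django's database connection:
# from django.db import connection
# with connection.cursor() as cursor:
#     cursor.execute(load_sql("example.sql"))
#     user_ids = [row[0] for row in cursor.fetchall()]
```

This keeps .sql files editable with SQL tooling while the Python side only knows the file name.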

Odoo Sales Order import results in wrong product line items

When I import a sales order, the resulting quotation displays the wrong product. Here is the process I followed:
Import a new customer with the following csv file which works as expected:
id,property_account_payable_id/id,property_account_receivable_id/id,notify_email,active,company_id/id,company_type,name
TestCustomer,__export__.account_account_15,__export__.account_account_7,All Messages,TRUE,base.main_company,Company,Test Customer
Import a new product, which works as expected:
id,active,description,categ_id/id,name,price,type,uom_po_id/id,uom_id/id,list_price,state
ABCDEF123456,TRUE,ABCDEF123456 Description,product.product_category_all,ABCDEF123456 Name,123,consu,product.product_uom_unit,product.product_uom_unit,453.67,
Import a new sales order, which does NOT work as expected:
id,partner_id/id,order_line/product_id/id,order_line/product_uom_qty
SalesOrder123456,TestCustomer,ABCDEF123456,1
Here is the export of the result of importing the sales order csv above:
"id","partner_id/id","order_line/product_id/id","order_line/product_uom_qty"
"SalesOrder123456","TestCustomer","__export__.product_product_13639","1.0"
If I go into Settings > Sequences & Identifiers > External Identifiers and search for "13639", I get a result of a completely different product which was imported earlier.
Anybody have any idea what is going on here? This seems like a bug in the import process.
Instead of using CSV for the import, you can use pgAdmin (a PostgreSQL GUI tool). Select your database, open the SQL query window at the top, and issue a SQL query (you must know the name of the table you want to retrieve). After that you can export the result and import it into your desired DB.

UUID to NUUID in Python

In my application, I get some values from MSSQL using pymssql. Python interprets one of these values as a UUID. I assigned this value to a variable called id. When I do
print (type(id),id)
I get
<class 'uuid.UUID'> cc26ce03-b6cb-4a90-9d0b-395c313fc968
Everything is as expected so far. Now, I need to make a query in MongoDb using this id. But the type of my field in MongoDb is ".NET UUID(Legacy)", which is NUUID. But I don't get any result when I query with
client.db.collectionname.find_one({"_id" : id})
This is because I need to convert UUID to NUUID.
Note: I also tried
client.db.collectionname.find_one({"_id" : NUUID("cc26ce03-b6cb-4a90-9d0b-395c313fc968")})
But it didn't work. Any ideas?
Assuming you are using PyMongo 3.x:
from bson.binary import CSHARP_LEGACY
from bson.codec_options import CodecOptions
options = CodecOptions(uuid_representation=CSHARP_LEGACY)
coll = client.db.get_collection('collectionname', options)
coll.find_one({"_id": id})
If instead you are using PyMongo 2.x
from bson.binary import CSHARP_LEGACY
coll = client.db.collectionname
coll.uuid_subtype = CSHARP_LEGACY
coll.find_one({"_id": id})
You have to tell PyMongo what format the UUID was originally stored in. There is also a JAVA_LEGACY representation.
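For reference, CSHARP_LEGACY corresponds to the .NET Guid byte layout, in which the first three UUID fields are stored little-endian; Python's uuid module exposes exactly this ordering as bytes_le. A small illustration of the byte swap (the UUID value is the one from the question):

```python
import uuid

def csharp_legacy_bytes(u):
    # .NET's Guid.ToByteArray() flips the first three fields to little-endian:
    # the 4-byte field reversed, then the two 2-byte fields reversed,
    # and the last 8 bytes unchanged
    b = u.bytes
    return b[3::-1] + b[5:3:-1] + b[7:5:-1] + b[8:]

u = uuid.UUID("cc26ce03-b6cb-4a90-9d0b-395c313fc968")
# csharp_legacy_bytes(u) == u.bytes_le
```

This is only to show what the codec does under the hood; with CodecOptions set as above, PyMongo performs this conversion for you.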