Odoo Sales Order import results in wrong product line items

When I import a Sales Order, the resulting Quotation displays the wrong product. Here is the process I followed:
Import a new customer with the following CSV file, which works as expected:
id,property_account_payable_id/id,property_account_receivable_id/id,notify_email,active,company_id/id,company_type,name
TestCustomer,__export__.account_account_15,__export__.account_account_7,All Messages,TRUE,base.main_company,Company,Test Customer
Import a new product, which works as expected:
id,active,description,categ_id/id,name,price,type,uom_po_id/id,uom_id/id,list_price,state
ABCDEF123456,TRUE,ABCDEF123456 Description,product.product_category_all,ABCDEF123456 Name,123,consu,product.product_uom_unit,product.product_uom_unit,453.67,
Import a new sales order, which does NOT work as expected:
id,partner_id/id,order_line/product_id/id,order_line/product_uom_qty
SalesOrder123456,TestCustomer,ABCDEF123456,1
Here is the export of the result of importing the sales order CSV above:
"id","partner_id/id","order_line/product_id/id","order_line/product_uom_qty"
"SalesOrder123456","TestCustomer","__export__.product_product_13639","1.0"
If I go into Settings > Sequences & Identifiers > External Identifiers and search for "13639", I get a result of a completely different product which was imported earlier.
Anybody have any idea what is going on here? This seems like a bug in the import process.

Instead of using CSV for the import, you can use pgAdmin (a PostgreSQL GUI tool). Select your database, open the SQL query tool from the toolbar, and issue a SQL query (you must know the name of the table you want to retrieve). After that you can export the result and import it into your desired DB.
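If you go the SQL route, a minimal diagnostic sketch (assuming a standard Odoo schema, where external identifiers live in the ir_model_data table) to see which record the generated ID actually points to:
-- Find the external identifier reported on the order line and the record
-- it resolves to; model should be product.product, and res_id is the
-- database ID of the matched product.
SELECT module, name, model, res_id
FROM ir_model_data
WHERE name LIKE '%13639%';
If res_id points at a different product than ABCDEF123456, that would confirm the import resolved the order line to the wrong record.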

Related

How to filter Socrata API dataset by multiple values for a single field?

I am attempting to create a CSV file using Python by reading from this specific API:
https://dev.socrata.com/foundry/data.cdc.gov/5jp2-pgaw
Where I'm running into trouble is that I would like to specify multiple values of "loc_admin_zip" to search for at once, for example returning a CSV file where the zip is either "10001" or "10002". However, I can't figure out how to do this; I can only get it to work when "loc_admin_zip" is set to a single value. Any help would be appreciated. My code so far:
import pandas as pd
from sodapy import Socrata
client = Socrata("data.cdc.gov", None)
results = client.get("5jp2-pgaw", loc_admin_zip=10002)
results_df = pd.DataFrame.from_records(results)
results_df.to_csv('test.csv')
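One possible approach (untested against this dataset): SoQL supports a $where clause, which sodapy exposes as the where keyword argument, so you can ask for an IN list of zip codes:
import pandas as pd
from sodapy import Socrata

client = Socrata("data.cdc.gov", None)

# SoQL $where with an IN list; zip values are text in SoQL, hence the quotes.
results = client.get(
    "5jp2-pgaw",
    where="loc_admin_zip in ('10001', '10002')",
)

results_df = pd.DataFrame.from_records(results)
results_df.to_csv("test.csv", index=False)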

Error: Not found: Dataset my-project-name:domain_public was not found in location US

I need to make a query against a dataset provided by a public project. I created my own project and added their dataset to my project. There is a table named domain_public. When I query this table I get this error:
Query Failed
Error: Not found: Dataset my-project-name:domain_public was not found in location US
Job ID: my-project-name:US.bquijob_xxxx
I am in a non-US country. What is the issue and how can I fix it?
EDIT 1:
I changed the processing location to asia-northeast1 (I am based in Singapore), but I got the same error:
Error: Not found: Dataset censys-my-projectname:domain_public was not found in location asia-northeast1
Here is a view of my project and the public project censys-io (screenshot not included):
Please advise.
EDIT 2:
The query I was typing, based on the Censys tutorial, is:
#standardsql
SELECT domain, alexa_rank
FROM domain_public.current
WHERE p443.https.tls.cipher_suite = 'some_cipher_suite_goes_here';
When I changed the FROM clause to:
FROM `censys-io.domain_public.current`
And the last line to:
WHERE p443.https.tls.cipher_suite.name = 'some_cipher_suite_goes_here';
It worked. Should I take it that I should always include projectname.dataset.table (if I'm using the correct terms) and report the typo to Censys? Or is this a special case for this project for some reason?
BigQuery can't find your data
How to fix it
Make sure your FROM location contains 3 parts
A project (e.g. bigquery-public-data)
A database (e.g. hacker_news)
A table (e.g. stories)
Like so
`bigquery-public-data.hacker_news.stories`
(Note the backticks.)
Examples
Wrong
SELECT *
FROM `stories`
Wrong
SELECT *
FROM `hacker_news.stories`
Correct
SELECT *
FROM `bigquery-public-data.hacker_news.stories`
In the Web UI, click the Show Options button and then select your location under "Processing Location".
Specify the location in which the query will execute. Queries that run in a specific location may only reference data in that location. For data in US/EU, you may choose Unspecified to run the query in the location where the data resides. For data in other locations, you must specify the query location explicitly.
Update
As stated above, queries that run in a specific location may only reference data in that location.
Assuming the censys-io.domain_public dataset has its data in the US, you need to specify US as the Processing Location.
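If you are not sure where your own datasets live, one way to check (a sketch using the standard INFORMATION_SCHEMA views, run against your own project) is:
-- List the datasets in the US multi-region along with their exact location.
SELECT schema_name, location
FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA;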
The problem turned out to be due to the wrong table name in the FROM clause.
The right FROM clause should be:
FROM `censys-io.domain_public.current`
While I was typing:
FROM domain_public.current
So the project name is required in the FROM clause, and the backticks are required because of the hyphen in the project name.
Make sure your FROM location contains 3 parts, as @stevec mentioned:
A project (e.g. bigquery-public-data)
A database (e.g. hacker_news)
A table (e.g. stories)
But in my case, I was using BigQuery from the Google Apps Script editor, which defaults to legacy SQL, so you need to set useLegacySql to false, for example:
var projectId = 'xxxxxxx';
var request = {
  // Standard SQL requires the full `project.database.table` path in backticks.
  query: 'SELECT * FROM `project.database.table`',
  useLegacySql: false
};
var queryResults = BigQuery.Jobs.query(request, projectId);
Check the exact case (upper or lower) and spelling of the table or view name.
Copy it from the table definition and your problem will be solved.
I was using FPL009_Year_Categorization instead of FPL009_Year_categorization (C instead of c) and getting the error "not found in location asia-south1".
I copied it with the exact case and the problem was resolved.
On your BigQuery console, go to the Explorer on the left pane, click the small three dots, then select the query option from the list. This step confirms you chose the correct project and dataset. Then you can edit the query in the query pane on the right.
Maybe the dataset's location was changed in the Create Dataset options; it should be US or the default location.

django and raw SQL

What is the best way to import SQL into Python code?
I use Django, but sometimes I write raw SQL, and sometimes this code is big. My question is how I can import example.sql into my code.py.
example.sql:
SELECT id FROM users_user
code.py:
row_sql = <this string>
user_ids = RowSqlManager(row_sql).execute()
If I name my example.sql as example.py it is easy, but I want to keep the Python code and the SQL separate. Is there a way to do it? Or is it better to rename example.sql to example.py?
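One straightforward approach (a sketch; RowSqlManager is your own class, assumed to accept a SQL string) is to keep the query in example.sql and read the file at runtime:
from pathlib import Path

# Read the SQL text from a file that sits next to this module.
row_sql = Path(__file__).with_name("example.sql").read_text()

# RowSqlManager is your own wrapper; any API that accepts a
# SQL string works the same way.
user_ids = RowSqlManager(row_sql).execute()
This keeps the SQL in .sql files, where editors can highlight and lint it, while the Python side only handles loading.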

UPDATE query produces no change

I wrote the following UPDATE statement:
UPDATE print_archive
SET pages = pages*2
WHERE printer ILIKE '%plot%' AND paper_size = 'Arch D';
It should look in the print_archive table for any printers with "plot" in their name and the paper size 'Arch D'.
If it finds any, it is supposed to multiply the page count by 2.
Below is a sample of the print_archive table data:
(column names)
"name_id","printer","pages","source","date","file_name","duplex","paper_size"
Sample Data:
jane, \\PRINTSRV\plot9, 1, \\COMP-01, 01/21/2017 14:30:39, hello_world.pdf, No, Arch D,
billy, \\PRINTSRV\Plot13, 1, \\COMP-02, 02/20/2016 10:37:23, bye_world.doc, No, Arch D,
But no matter what I change or how many times I run the UPDATE statement, it always returns 0.
What am I doing wrong?
So initially I had added the latin1 encoding when importing the CSV file, as the import process kept producing encoding error messages with the default encoding. I switched the import of the CSV file to latin2 and that worked better. There does seem to be a leading space in some of the fields as well, so adding '%' helped too (though this was not working in latin1). pgAdmin 4 (which is how my users will interact) handles latin2 pretty well, though from the psql command line I need to set the client encoding to latin2 before it works (SET client_encoding TO 'latin2';).
Thanks guys. All your recommendations and leads helped me.
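For anyone hitting the same symptom, a hedged diagnostic sketch (PostgreSQL; btrim strips the leading and trailing spaces described above):
-- Inspect what the rows actually contain; length() exposes hidden whitespace.
SELECT printer, paper_size, length(paper_size)
FROM print_archive
WHERE printer ILIKE '%plot%';

-- Match 'Arch D' even when the stored value has stray spaces around it.
UPDATE print_archive
SET pages = pages * 2
WHERE printer ILIKE '%plot%'
  AND btrim(paper_size) = 'Arch D';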

In Hybris, how can we import products from the database and store the product attribute values in a CSV file using Beanshell?

I want to read products with the attribute values price and description from the database and store them in a CSV file using a Beanshell script.
You can use the export function in the HAC with a specific ImpEx header.
I don't know the exact requirement, but it's not ideal to do this with Beanshell, since Hybris already has an import/export framework.
You might have a look at DefaultExportService; it can be used in a shell script.
You could restrict your search results to return only a certain catalog/version (use in the HAC/hMC):
$catalog=YourCatalogId
$version=YourCatalogVersion
"#% import de.hybris.platform.jalo.product.Product;"
"#% impex.setTargetFile( ""Products_and_price.csv"", true, 1, -1 );"
INSERT_UPDATE Product;code[unique=true];description[lang=en];description[lang=de];europe1Prices(price,currency(isoCode))
"#% impex.exportItems("" SELECT {p:pk} FROM {Product as p JOIN CatalogVersion as cv ON {cv:PK}={p:catalogVersion} JOIN catalog as c ON {c:pk}={cv:catalog}} WHERE {c:id}='$catalog' AND {cv:version}='$version'"", Collections.EMPTY_MAP, Collections.singletonList( Product.class ), true, true, -1, -1 );"
Add more languages for the description if needed. Products are linked to their store through their catalog; you could export this relation (catalog-store) on a new line, though I'm not sure how to display it in one line.