BF CLI parsing error on exported QnA maker knowledge base - qnamaker

We need a fix for the following parsing error that occurs when QnA maker KBs are exported via the BF CLI.
We use the BF CLI (https://www.npmjs.com/package/@microsoft/botframework-cli) with the @next version to export QnA maker KBs in *.qna format and then generate the snapshot for the BF Orchestrator.
The command used to export the QnA maker KB is:
bf qnamaker:kb:export --out=.\cognitiveModels\q_IT.qna --kbId=a238d6ac-XXXX-YYY-9fdd-28e335030610 --subscriptionKey=d01228763e**** --qnaFormat
The command used to generate the snapshot is:
bf orchestrator:create --hierarchical --in ./dataSources --out ./generated --refresh
Generation of the snapshot fails due to a parsing error and displays the following error message
undefined
Failed to parse C:\GitHub-local\To-trash\orchestrator\dataSources\q_IT.qna
We identified that the parsing error comes from a specific line in the q_IT.qna file. Here is the original content and the modified version:
Original (automatically generated via the bf qnamaker:kb:export command):
**Prompts:**
- [What Should You Sync](#782)
- [Files that Cannot Sync
](#783)
- [Box Sync Status](#784)
Manual modification to fix the parsing issue:
**Prompts:**
- [What Should You Sync](#782)
- [Files that Cannot Sync](#783)
- [Box Sync Status](#784)
The problem is generated by the bf qnamaker:kb:export CLI command - we need a fix for that.
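Until the export command is fixed, the manual join can be scripted as a workaround. Below is a rough PowerShell sketch (untested; it assumes the only offending pattern is a line break immediately before the ](#id) part of a prompt link, and that the exported file is UTF-8):
# Rejoin prompt links that the exporter split across two lines
(Get-Content -Raw .\cognitiveModels\q_IT.qna) -replace '\r?\n\s*\]\(#', '](#' | Set-Content -NoNewline -Encoding utf8 .\cognitiveModels\q_IT.qna
After rewriting the file this way, bf orchestrator:create should no longer hit the parsing error above.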
Here are screenshots with the content of the QnA maker portal and the QnA pair associated with this parsing error.
QnA maker portal view

Extract an attribute in GPKG

I am trying to extract rivers from OSM. I downloaded the waterway GPKG, which I believe has over 21 million entries (see link), with a file size of 19.9 GB.
I have tried using the Split Vector Layer tool in QGIS, but it would crash.
I was thinking of using GDAL's ogr2ogr, but I am having trouble putting the command line together.
I first isolated the MultiLineString layer with the following command:
ogr2ogr -f gpkg water.gpkg waterway_EPSG4326.gpkg waterway_EPSG4326_line -nlt linestring
ogrinfo water.gpkg
INFO: Open of `water.gpkg' using driver `GPKG' successful.
1: waterway_EPSG4326_line (Line String)
I tried the following command, but it is not working.
ogr2ogr -f GPKG SELECT * FROM waterway_EPSG4326_line - where waterway="river" river.gpkg water.gpkg
Please let me know what is missing or if there is an easier way to perform the task. I tried opening the file with the R sf package, but it would not load even after a long time.
Thanks
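For what it's worth, ogr2ogr expects the destination dataset before the source dataset, and an attribute filter is normally passed with -where (or a complete -sql statement) rather than a bare SELECT. A hedged sketch using the layer and field names from the question (untested against this particular file):
ogr2ogr -f GPKG river.gpkg water.gpkg waterway_EPSG4326_line -where "waterway='river'"
or, equivalently, with -sql:
ogr2ogr -f GPKG river.gpkg water.gpkg -sql "SELECT * FROM waterway_EPSG4326_line WHERE waterway = 'river'"
The same filter could also be applied directly to the original waterway_EPSG4326.gpkg to skip the intermediate water.gpkg.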

How do I get Source Extractor to Analyze an Image?

I'm relatively inexperienced in coding, so right now I'm just familiarizing myself with the basics of how to use SE, which I'll need to use in the near future.
At the moment I'm trying to get it to analyze a FITS file on my computer (which is a Mac). I'm sure this is something obvious, but I haven't been able to get it to do that. Following the instructions in Chapters 6 and 7 of Source Extractor for Dummies (linked below), I input the following:
sex MedSpiral_20deg_Serl2_.45_.fits.fits -c configuration_file.txt
And got the following error message:
WARNING: configuration_file.txt not found, using internal defaults
----- SExtractor 2.19.5 started on 2020-02-05 at 17:10:59 with 1 thread
Setting catalog parameters
ERROR: can't read default.param
I then tried entering parameters manually:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c configuration_file.txt -DETECT_TYPE CCD -MAG_ZEROPOINT 2.5 -PIXEL_SCALE 0 -SATUR_LEVEL 1 -SEEING_FWHM 1
And got the same error message. I tried referencing default.sex directly:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c default.sex
And got the same error message again, substituting "configuration_file.txt not found" with "default.sex not found" (I checked that default.sex was on my computer, it is). The same thing happened when I tried to use default.param.
Here's the link to SE for Dummies (Chapter 6 begins on page 19):
http://astroa.physics.metu.edu.tr/MANUALS/sextractor/Guide2source_extractor.pdf
If you run the command "sex MedSpiral_20deg_Ser12_.45_fits.fits -c default.sex" from within the config folder (inside the sextractor folder), it runs fine.
However, I wonder how I can run the sextractor command from any folder on my computer?
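One approach that should work from any directory is to point SExtractor at the configuration files explicitly, since relative names like default.param are resolved from the current working directory. A hedged sketch, assuming the files live in a sextractor/config folder under your home directory (adjust the paths to your installation):
sex MedSpiral_20deg_Ser12_.45_fits.fits \
    -c ~/sextractor/config/default.sex \
    -PARAMETERS_NAME ~/sextractor/config/default.param \
    -FILTER_NAME ~/sextractor/config/default.conv \
    -STARNNW_NAME ~/sextractor/config/default.nnw
Alternatively, copying default.sex, default.param, default.conv, and default.nnw into whatever directory you run sex from has the same effect.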

co-simulation dymola fmu file can't be simulated by fmuchecker

We are trying to test the co-simulation options of Dymola and created an FMU file. We installed/built FMILibrary-2.0b2 and FMUChecker-2.0b1 from www.fmi-standard.org.
I encountered an issue while trying to run the FMUChecker (fmuCheck.linux32) on an FMU file my colleague created with Dymola. When I create an FMU from the same Dymola model with my own Dymola license, the issue is not reproducible: fmuCheck.linux32 runs fine without any error messages.
My colleague can run both files without problems!
As our goal is to use this option for co-simulation, I tried to run the FMU on a PC without Dymola, and again I got the same error with both my copy of the FMU and the one my colleague created.
Here's the error message:
fmuCheck.linux32 PemFcSysLib_Projects_Modl_SimCoolCirc.fmu
[INFO][FMUCHK] Will process FMU PemFcSysLib_Projects_Modl_SimCoolCirc.fmu
[INFO][FMILIB] XML specifies FMI standard version 1.0
[INFO][FMI1XML] Processing implementation element (co-simulation FMU detected)
[INFO][FMUCHK] Model name: PemFcSysLib.Projects.Modl.SimCoolCirc
[INFO][FMUCHK] Model identifier: PemFcSysLib_Projects_Modl_SimCoolCirc
[INFO][FMUCHK] Model GUID: {6eba096a-a778-4cf1-a7c2-3bd6121a1a52}
[INFO][FMUCHK] Model version:
[INFO][FMUCHK] FMU kind: CoSimulation_StandAlone
[INFO][FMUCHK] The FMU contains:
18 constants
1762 parameters
26 discrete variables
281 continuous variables
0 inputs
0 outputs
2087 internal variables
0 variables with causality 'none'
2053 real variables
0 integer variables
0 enumeration variables
34 boolean variables
0 string variables
[INFO][FMUCHK] Printing output file header
time
[INFO][FMILIB] Loading 'linux32' binary with 'standard32' platform types
[INFO][FMUCHK] Version returned from FMU: 1.0
[FMU][FMU status:OK]
...
[FMU][FMU status:OK]
[FMU][FMU status:Error] fmiInitialize: dsblock_ failed, QiErr = 1
[FMU][FMU status:Error] Unless otherwise indicated by error messages, possible errors are (non-exhaustive):
1. The license file was not found. Use the environment variable "DYMOLA_RUNTIME_LICENSE" t
[FATAL][FMUCHK] Failed to initialize FMU for simulation (FMU status: Error)
[FATAL][FMUCHK] Simulation loop terminated at time 0 since FMU returned status: Error
FMU check summary:
FMU reported:
2 warning(s) and error(s)
Checker reported:
0 Warning(s)
0 Error(s)
Fatal error occured during processing
I think an FMU file shouldn't need a Dymola license to be simulated, so I can't see the reason this simulation failed.
What could be the reason for this strange behaviour?
This is partially the same error message as in this issue:
Initialization of a Dymola FMU in Simulink
Any suggestions are much appreciated. Thank you.
It seems that Dymola has not set the environment variable pointing to the license file on Ubuntu. We have done this manually by adding the following lines to .bashrc:
# Dymola runtime license, path
DYMOLA_RUNTIME_LICENSE=$HOME/.dynasim/dymola.lic
export DYMOLA_RUNTIME_LICENSE
Now we can simulate each other's FMU files!
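To pick up the new variable in an already-open terminal and re-run the check, something like this should work (a sketch, assuming the license file really is at $HOME/.dynasim/dymola.lic):
source ~/.bashrc
echo $DYMOLA_RUNTIME_LICENSE
fmuCheck.linux32 PemFcSysLib_Projects_Modl_SimCoolCirc.fmu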
Whether an exported FMU requires a license depends on whether the copy of Dymola that exported the FMU had the "Binary Export" feature. The bottom line is that if you want unencumbered FMUs from Dymola, you have to pay for an extra licensed feature.

error message when trying to connect to Sybase using SSIS

I am using SSIS in SQL Server 2008, trying to connect to Sybase 12 using the Sybase 15.2 driver. I even tried the Sybase 12 driver and got the same error.
Error message:
[ZZZZZ]
[Message Class: 16]
[Message State: 5]
[Transaction State: 1]
[Server Name: PHXPROD]
[Native Code: 2812]
[ASEOLEDB]Stored procedure 'sp_oledb_datatype_info' not found.
Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output). (ASEOLEDB)
Attached images show that I am able to connect, but soon after connecting I get the error message.
Step 1
Step 2
Error message when I used ADO.NET
It looks like you have to investigate this problem further. Reading SyBooks Online, it says:
If error 2812 occurs on system stored procedures (as in your case, sp_oledb_tables and sp_oledb_datatype_info), it may be resolved by running the installmaster script, which installs all system procedures and initializes various other Adaptive Server structures.
How to run the installmaster script?
Using isql, run the new installmaster script included with this release by entering:
isql -Usa -P<sa password> -S<server name> -n -i$SYBASE/$SYBASE_ASE/scripts/installmaster -o<output file>
Reference: Running the installmaster script
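For example, with the server name from the error above, the sa password left as a placeholder, and a hypothetical output file name, the call could look like:
isql -Usa -P<sa password> -SPHXPROD -n -i$SYBASE/$SYBASE_ASE/scripts/installmaster -o installmaster.out
This is run against the Adaptive Server that SSIS connects to (PHXPROD here), typically from the server host where the installmaster script ships.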
Hope it helps you.

Unexpected error while loading data

I am getting an "Unexpected" error. I tried a few times, and I still could not load the data. Is there any other way to load data?
gs://log_data/r_mini_raw_20120510.txt.gz to 567402616005:myv.may10c
Errors:
Unexpected. Please try again.
Job ID: job_4bde60f1c13743ddabd3be2de9d6b511
Start Time: 1:48pm, 12 May 2012
End Time: 1:51pm, 12 May 2012
Destination Table: 567402616005:myvserv.may10c
Source URI: gs://log_data/r_mini_raw_20120510.txt.gz
Delimiter: ^
Max Bad Records: 30000
Schema:
zoneid: STRING
creativeid: STRING
ip: STRING
Update:
I am using the file that can be found here:
http://saraswaticlasses.net/bad.csv.zip
bq load -F '^' --max_bad_record=30000 mycompany.abc bad.csv id:STRING,ceid:STRING,ip:STRING,cb:STRING,country:STRING,telco_name:STRING,date_time:STRING,secondary:STRING,mn:STRING,sf:STRING,uuid:STRING,ua:STRING,brand:STRING,model:STRING,os:STRING,osversion:STRING,sh:STRING,sw:STRING,proxy:STRING,ah:STRING,callback:STRING
I am getting an error "BigQuery error in load operation: Unexpected. Please try again."
The same file works from Ubuntu while it does not work from CentOS 5.4 (Final)
Does the OS encoding need to be checked?
The file you uploaded has an unterminated quote. Can you delete that line and try again? I've filed an internal bigquery bug to be able to handle this case more gracefully.
$grep '"' bad.csv
3000^0^1.202.218.8^2f1f1491^CN^others^2012-05-02 20:35:00^^^^^"Mozilla/5.0^generic web browser^^^^^^^^
When I run a load from my workstation (Ubuntu), I get a warning about the line in question. Note that if you were using a larger file, you would not see this warning, instead you'd just get a failure.
$bq show --format=prettyjson -j job_e1d8636e225a4d5f81becf84019e7484
...
"status": {
"errors": [
{
"location": "Line:29057 / Field:12",
"message": "Missing close double quote (\") character: field starts with: <Mozilla/>",
"reason": "invalid"
}
]
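If editing the file by hand is impractical, lines with an unbalanced number of double quotes can be filtered out before loading. A rough sketch (it treats any line with an odd number of " characters as bad, which matches the single stray quote in the grep output above):
awk -F'"' 'NF % 2 == 0' bad.csv              # show lines with an odd (unbalanced) number of " characters
awk -F'"' 'NF % 2 == 1' bad.csv > fixed.csv  # keep only lines whose quotes are balanced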
My suspicion is that you have rows or fields in your input data that exceed the 64 KB limit. Perhaps re-check the formatting of your data, check that it is gzipped properly, and if all else fails, try importing uncompressed data. (One possibility is that the entire compressed file is being interpreted as a single row/field that exceeds the aforementioned limit.)
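Both points can be checked locally before re-uploading; a quick sketch (the awk check counts characters per line, which is only an approximation of the byte size the limit applies to):
gzip -t r_mini_raw_20120510.txt.gz    # prints nothing if the archive is intact
zcat r_mini_raw_20120510.txt.gz | awk 'length($0) > max { max = length($0) } END { print "longest line:", max }'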
To answer your original question, there are a few other ways to import data: you could upload directly from your local machine using the command-line tool or the web UI, or you could use the raw API. However, all of these mechanisms (including the Google Storage import that you used) funnel through the same CSV parser, so it's possible that they'll all fail in the same way.
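For example, loading directly from a local file with the command-line tool follows the same pattern as the Cloud Storage load, just with a local path instead of a gs:// URI (a sketch reusing the delimiter from above and a shortened, hypothetical schema):
bq load -F '^' mycompany.abc ./bad.csv id:STRING,ceid:STRING,ip:STRING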