Using Assimp to load FBX animation data

I recently read Introduction to 3D Game Programming with DirectX 11, and I am trying to use the example in chapter 25 to load my own model. After skinning a model and exporting the data in .fbx format, I try to load the bone and animation data and get wrong results. So:
1. I just copy each bone's offset matrix into my own structure and transpose it; is this OK?
for (auto & s : names)
{
    auto index = maps[s];
    auto m = matrixmaps[s];
    memcpy(offset.float1d, m, sizeof(float) * 16);
    TransposeMatrix(offset);
    Offsets[index] = offset;
}
2. I found that the root bone (Bip001) has zero key frames, so I checked the node hierarchy of the data Assimp loaded, and I noticed that the bone hierarchy is as below:
|_RootNode
|_Bip001_$AssimpFbx$_Translation
|_Bip001_$AssimpFbx$_PreRotation
|_Bip001_$AssimpFbx$_Rotation
|_Bip001_$AssimpFbx$_Scaling
|_Bip001
|_Bip001 Footsteps
|_Bip001 Pelvis
|_Bip001 Spine
|_Bip001 Spine1
| |_Bip001 Spine2
| |_Bip001 Spine3
| |_Bip001 Neck
| |_Bip001 L Clavicle
| | |_Bip001 L UpperArm
| | |_Bip001 L Forearm
| | |_Bip001 L Hand
| | |_Bip001 L Finger0
| | |_Bip001 L Finger0Nub
| |_Bip001 R Clavicle
| | |_Bip001 R UpperArm
| | |_Bip001 R Forearm
| | |_Bip001 R Hand
| | |_Bip001 R Finger0
| | |_Bip001 R Finger0Nub
| |_Bip001 Head
| |_Bip001 HeadNub
|_Bip001 L Thigh
| |_Bip001 L Calf
| |_Bip001 L Foot
| |_Bip001 L Toe0
| |_Bip001 L Toe01
| |_Bip001 L Toe02
| |_Bip001 L Toe0Nub
|_Bip001 R Thigh
|_Bip001 R Calf
|_Bip001 R Foot
|_Bip001 R Toe0
|_Bip001 R Toe01
|_Bip001 R Toe02
|_Bip001 R Toe0Nub
How should I deal with the parent nodes of Bip001 (the $AssimpFbx$ nodes)? Should I copy the position keys, rotation keys, and scaling keys from those nodes to Bip001?
Please help me figure it out. Thank you very much.

For the second problem, setting AI_CONFIG_IMPORT_FBX_PRESERVE_PIVOTS to false works for me.
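In the C++ API this is a property set on the importer before reading the file. A minimal sketch, assuming the usual Assimp headers (the file name and post-process flags are placeholders):
#include <assimp/Importer.hpp>
#include <assimp/config.h>
#include <assimp/postprocess.h>
#include <assimp/scene.h>

Assimp::Importer importer;
// Collapse the $AssimpFbx$ pivot helper nodes into the bones themselves.
importer.SetPropertyBool(AI_CONFIG_IMPORT_FBX_PRESERVE_PIVOTS, false);
const aiScene* scene = importer.ReadFile("model.fbx",
    aiProcess_Triangulate | aiProcess_ConvertToLeftHanded);
With pivots no longer preserved, Bip001 itself should carry the translation, rotation, and scaling keys, so there is no need to merge keys from the helper nodes by hand.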

Related

Join/Add data to MultiIndex dataframe in pandas

I have some measurement data from different dust analyses:
two locations, MC174 and MC042;
two fractions, PM2.5 and PM10;
several analytic results [Cl, Na, K, ...].
I created a multi-column dataframe like this:
|           MC174           |           MC042           |
|    PM2.5    |    PM10     |    PM2.5    |    PM10     |
| Cl | Na | K | Cl | Na | K | Cl | Na | K | Cl | Na | K |
location = ['MC174','MC042']
fraction = ['PM10','PM2.5']
value = [ 'date' ,'Cl','NO3', 'SO4','Na', 'NH4','K', 'Mg','Ca', 'masse','OC_R', 'E_CR','OC_T', 'EC_T']
midx = pd.MultiIndex.from_product([location, fraction,value],names=['location','fraction','value'])
df = pd.DataFrame(columns=midx)
df
and I prepared 4 dataframes with matching columns, one for each of those location/fraction combinations.
date       | Cl  | Na  | K
-----------+-----+-----+-----
01-01-2021 | 3.1 | 4.3 | 1.0
...
31-12-2021 | 4.9 | 3.8 | 0.8
Now I want to fill the large dataframe with the data from the four location/fraction frames:
DF1 -> MainDF[MC174][PM10]
DF2 -> MainDF[MC174][PM2.5]
and so on...
My goal is to have one dataframe with the dates of the year in its index, the multilevel column structure I described at the top, and all the data inside it.
I tried:
main_df['MC174']['PM10'].append(data_MC174_PM10)
pd.concat([main_df['MC174']['PM10'], data_MC174_PM10],axis=0)
main_df.loc[:,['MC174'],['PM10']] = data_MC174_PM10
but the dataframe is never filled.
Thanks in advance!
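A sketch of one way that typically works: concatenate the four prepared frames along the columns with tuple keys, so the keys become the outer column levels. Apart from data_MC174_PM10, the frame names below are hypothetical, and each frame is assumed to still have its date column:
import pandas as pd

pieces = {
    ('MC174', 'PM10'):  data_MC174_PM10.set_index('date'),
    ('MC174', 'PM2.5'): data_MC174_PM25.set_index('date'),   # hypothetical name
    ('MC042', 'PM10'):  data_MC042_PM10.set_index('date'),   # hypothetical name
    ('MC042', 'PM2.5'): data_MC042_PM25.set_index('date'),   # hypothetical name
}

main_df = pd.concat(pieces, axis=1)
main_df.columns.names = ['location', 'fraction', 'value']
This builds the full frame in one step instead of assigning into an empty one; the attempts above either return a new object without assigning it back or try to align an empty frame with mismatched labels, so main_df stays empty.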

How to pivot columns so they turn into rows using PySpark or pandas?

I have a dataframe that looks like the one below, but with hundreds of rows. I need to pivot it, so that each column after Region becomes a row, like the second table below.
+--------------+----------+---------------------+----------+------------------+------------------+-----------------+
|city |city_tier | city_classification | Region | Jan-2022-orders | Feb-2022-orders | Mar-2022-orders|
+--------------+----------+---------------------+----------+------------------+------------------+-----------------+
|new york | large | alpha | NE | 100000 |195000 | 237000 |
|los angeles | large | alpha | W | 330000 |400000 | 580000 |
I need to pivot it using PySpark, so I end up with something like this:
+--------------+----------+---------------------+----------+-----------+---------+
|city |city_tier | city_classification | Region | month | orders |
+--------------+----------+---------------------+----------+-----------+---------+
|new york | large | alpha | NE | Jan-2022 | 100000 |
|new york | large | alpha | NE | Fev-2022 | 195000 |
|new york | large | alpha | NE | Mar-2022 | 237000 |
|los angeles | large | alpha | W | Jan-2022 | 330000 |
|los angeles | large | alpha | W | Fev-2022 | 400000 |
|los angeles | large | alpha | W | Mar-2022 | 580000 |
P.S.: A solution using pandas would work too.
In pandas:
df.melt(df.columns[:4], var_name = 'month', value_name = 'orders')
          city city_tier city_classification Region            month  orders
0     new york     large               alpha     NE  Jan-2022-orders  100000
1  los angeles     large               alpha      W  Jan-2022-orders  330000
2     new york     large               alpha     NE  Feb-2022-orders  195000
3  los angeles     large               alpha      W  Feb-2022-orders  400000
4     new york     large               alpha     NE  Mar-2022-orders  237000
5  los angeles     large               alpha      W  Mar-2022-orders  580000
or even
df.melt(['city', 'city_tier', 'city_classification', 'Region'],
        var_name = 'month', value_name = 'orders')
          city city_tier city_classification Region            month  orders
0     new york     large               alpha     NE  Jan-2022-orders  100000
1  los angeles     large               alpha      W  Jan-2022-orders  330000
2     new york     large               alpha     NE  Feb-2022-orders  195000
3  los angeles     large               alpha      W  Feb-2022-orders  400000
4     new york     large               alpha     NE  Mar-2022-orders  237000
5  los angeles     large               alpha      W  Mar-2022-orders  580000
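If the month values should match the desired output exactly (without the -orders suffix), one extra line on the melted frame should do it; out is just a throwaway name here:
out = df.melt(['city', 'city_tier', 'city_classification', 'Region'],
              var_name='month', value_name='orders')
out['month'] = out['month'].str.replace('-orders', '', regex=False)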
In PySpark, with your current example:
from pyspark.sql import functions as F

df = spark.createDataFrame(
    [('new york', 'large', 'alpha', 'NE', 100000, 195000, 237000),
     ('los angeles', 'large', 'alpha', 'W', 330000, 400000, 580000)],
    ['city', 'city_tier', 'city_classification', 'Region', 'Jan-2022-orders', 'Feb-2022-orders', 'Mar-2022-orders']
)
df2 = df.select(
    'city', 'city_tier', 'city_classification', 'Region',
    F.expr("stack(3, 'Jan-2022', `Jan-2022-orders`, 'Fev-2022', `Feb-2022-orders`, 'Mar-2022', `Mar-2022-orders`) as (month, orders)")
)
df2.show()
# +-----------+---------+-------------------+------+--------+------+
# | city|city_tier|city_classification|Region| month|orders|
# +-----------+---------+-------------------+------+--------+------+
# | new york| large| alpha| NE|Jan-2022|100000|
# | new york| large| alpha| NE|Fev-2022|195000|
# | new york| large| alpha| NE|Mar-2022|237000|
# |los angeles| large| alpha| W|Jan-2022|330000|
# |los angeles| large| alpha| W|Fev-2022|400000|
# |los angeles| large| alpha| W|Mar-2022|580000|
# +-----------+---------+-------------------+------+--------+------+
The function that enables this is stack. It does not have a DataFrame API, so you need to use expr to access it.
BTW, this is not pivoting; it's the opposite: unpivoting.

How can I optimise this webscraping code for iterative loop?

This code scrapes www.oddsportal.com for all the URLs provided in the code and appends the results to a dataframe.
I am not very well versed in iterative logic, hence I am finding it difficult to improve on it.
Code:
import pandas as pd
from selenium import webdriver
from bs4 import BeautifulSoup as bs

browser = webdriver.Chrome()


class GameData:
    def __init__(self):
        self.date = []
        self.time = []
        self.game = []
        self.score = []
        self.home_odds = []
        self.draw_odds = []
        self.away_odds = []
        self.country = []
        self.league = []


def parse_data(url):
    browser.get(url)
    df = pd.read_html(browser.page_source, header=0)[0]
    html = browser.page_source
    soup = bs(html, "lxml")
    cont = soup.find('div', {'id': 'wrap'})
    content = cont.find('div', {'id': 'col-content'})
    content = content.find('table', {'class': 'table-main'}, {'id': 'tournamentTable'})
    main = content.find('th', {'class': 'first2 tl'})
    if main is None:
        return None
    count = main.findAll('a')
    country = count[1].text
    league = count[2].text
    game_data = GameData()
    game_date = None
    for row in df.itertuples():
        if not isinstance(row[1], str):
            continue
        elif ':' not in row[1]:
            game_date = row[1].split('-')[0]
            continue
        game_data.date.append(game_date)
        game_data.time.append(row[1])
        game_data.game.append(row[2])
        game_data.score.append(row[3])
        game_data.home_odds.append(row[4])
        game_data.draw_odds.append(row[5])
        game_data.away_odds.append(row[6])
        game_data.country.append(country)
        game_data.league.append(league)
    return game_data
urls = {
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/1",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/2",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/3",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/4",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/5",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/6",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/7",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/8",
"https://www.oddsportal.com/soccer/england/premier-league/results/#/page/9",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/1",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/2",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/3",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/4",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/5",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/6",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/7",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/8",
"https://www.oddsportal.com/soccer/england/premier-league-2019-2020/results/#/page/9",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/1",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/2",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/3",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/4",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/5",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/6",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/7",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/8",
"https://www.oddsportal.com/soccer/england/premier-league-2018-2019/results/#/page/9",
}
if __name__ == '__main__':
    results = None
    for url in urls:
        game_data = parse_data(url)
        if game_data is None:
            continue
        result = pd.DataFrame(game_data.__dict__)
        if results is None:
            results = result
        else:
            results = results.append(result, ignore_index=True)
    print(results)
| | date | time | game | score | home_odds | draw_odds | away_odds | country | league |
|-----|-------------|--------|----------------------------------|---------|-------------|-------------|-------------|-----------|--------------------------|
| 0 | 12 May 2019 | 14:00 | Brighton - Manchester City | 1:4 | 14.95 | 7.75 | 1.2 | England | Premier League 2018/2019 |
| 1 | 12 May 2019 | 14:00 | Burnley - Arsenal | 1:3 | 2.54 | 3.65 | 2.75 | England | Premier League 2018/2019 |
| 2 | 12 May 2019 | 14:00 | Crystal Palace - Bournemouth | 5:3 | 1.77 | 4.32 | 4.22 | England | Premier League 2018/2019 |
| 3 | 12 May 2019 | 14:00 | Fulham - Newcastle | 0:4 | 2.45 | 3.55 | 2.92 | England | Premier League 2018/2019 |
| 4 | 12 May 2019 | 14:00 | Leicester - Chelsea | 0:0 | 2.41 | 3.65 | 2.91 | England | Premier League 2018/2019 |
| 5 | 12 May 2019 | 14:00 | Liverpool - Wolves | 2:0 | 1.31 | 5.84 | 10.08 | England | Premier League 2018/2019 |
| 6 | 12 May 2019 | 14:00 | Manchester Utd - Cardiff | 0:2 | 1.3 | 6.09 | 9.78 | England | Premier League 2018/2019 |
As you can see, the URLs could be generated to iterate through all the pages of each league/season.
[screenshot: page navigation element shown in the browser's Inspect Element view]
If I could adapt this code to run iteratively for every page shown in that pagination element, it would be really useful and helpful.
How can I optimise this code to run iteratively for every page?
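One way to tighten this is to generate the page URLs instead of listing them by hand, and to collect the per-page frames with pd.concat; a minimal sketch reusing parse_data from above, assuming the same three seasons and nine pages per season seen in the hard-coded URLs:
BASE = "https://www.oddsportal.com/soccer/england/premier-league{season}/results/#/page/{page}"
seasons = ["", "-2019-2020", "-2018-2019"]
urls = [BASE.format(season=s, page=p) for s in seasons for p in range(1, 10)]

# Collect one frame per page and concatenate once at the end,
# instead of calling DataFrame.append repeatedly.
frames = []
for url in urls:
    game_data = parse_data(url)
    if game_data is not None:
        frames.append(pd.DataFrame(game_data.__dict__))
results = pd.concat(frames, ignore_index=True)
The page count is still hard-coded here; reading it from the pagination element would make the loop fully automatic.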

SQL - Converting similar data while keeping different data

I will try to explain this in as much detail as I can; if the details are insufficient, please help edit my question or ask about what is missing so I can add it.
Problem Description
I am required to write a SELECT statement that converts the data in ORDERED_BY from the REQUESTED_AUTHORS table into AUTHOR_NAME data. For example, JJ in ORDERED_BY must be converted into Jack Johnson as shown in AUTHOR_NAME, so the end result is Jack Johnson instead of JJ. Below are my two tables:
REQUESTED_AUTHORS
+-----------+
| ORDERED_BY|
+-----------+
| JJ |
+-----------+
| AB |
+-----------+
| JonJey |
+-----------+
| Admin |
+-----------+
| Tech Assit|
+-----------+
| Dr.Ob |
+-----------+
| EL |
+-----------+
| TA |
+-----------+
| JD |
+-----------+
| ET |
+-----------+
AUTHOR_LIST
+----------------+---------------------+
| ORDER_INITIAL | AUTHOR_NAME |
+----------------+---------------------+
| JJ | Jack Johnson |
+----------------+---------------------+
| AB | Albert Bently |
+----------------+---------------------+
| AlecBor | Alec Baldwin |
+----------------+---------------------+
| KingSt | KingSton |
+----------------+---------------------+
| GaryNort | Gary Norton |
+----------------+---------------------+
| Prof.Li | Professor Li |
+----------------+---------------------+
| EL | Elton Langsey |
+----------------+---------------------+
| TA | Thomas Alecson |
+----------------+---------------------+
| JD | Johnny Depp |
+----------------+---------------------+
| ET | Elson Tarese |
+----------------+---------------------+
Solution Tried (1)
SELECT ru.*, al.AUTHOR_NAME
FROM REQUESTED_AUTHORS ru, AUTHOR_LIST al
WHERE al.ORDER_INITIAL = ru.ORDERED_BY;
But this did not work as I intended, because ORDERED_BY contains values that have no match in ORDER_INITIAL, so those rows are dropped. I tried using the DECODE function to convert it, but I am stuck there.
Solution Tried (2)
SELECT ru.ORDERED_BY,
al.ORDER_INITIAL,
DECODE(ru.ORDERED_BY, (ru.ORDERED_BY != al.ORDER_INITIAL), ru.ORDERED_BY,
(ru.ORDERED_BY = al.ORDER_INITIAL), al.AUTHOR_NAME)results
FROM REQUESTED_AUTHORS ru, AUTHOR_LIST al;
What I intend to do is convert the values that have a match but keep the unmatched values as they are.
That means the data shown below is to be kept the same and not converted, as there is nothing for it to convert to.
+-----------+
| ORDERED_BY|
+-----------+
| JonJey |
+-----------+
| Admin |
+-----------+
| Tech Assit|
+-----------+
| Dr.Ob |
+-----------+
My Question:
How may I write a query to convert the similar data and keep the different data?
You need an Outer Join (another reason to avoid old-style joins):
SELECT ru.*,
-- if there's a match return AUTHOR_NAME, otherwise keep ORDERED_BY
COALESCE(al.AUTHOR_NAME, ru.ORDERED_BY)
FROM REQUESTED_AUTHORS ru
LEFT JOIN AUTHOR_LIST al
ON al.ORDER_INITIAL = ru.ORDERED_BY;
Use a left outer join here:
SELECT ru.*, nvl( al.AUTHOR_NAME , ru.ordered_by)
FROM REQUESTED_AUTHORS ru, AUTHOR_LIST al
WHERE ru.ORDERED_BY = al.ORDER_INITIAL(+);
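With the sample data above, either query should return the author name where the initials match and pass the other values through unchanged, along these lines:
+------------+----------------+
| ORDERED_BY | RESULT         |
+------------+----------------+
| JJ         | Jack Johnson   |
| AB         | Albert Bently  |
| JonJey     | JonJey         |
| Admin      | Admin          |
| Tech Assit | Tech Assit     |
| Dr.Ob      | Dr.Ob          |
| EL         | Elton Langsey  |
| TA         | Thomas Alecson |
| JD         | Johnny Depp    |
| ET         | Elson Tarese   |
+------------+----------------+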

sphinx using JSGF grammar with weights: java.lang.NullPointerException exception

I am using the following grammar:
#JSGF V1.0;
grammar tag;
public <tag> = <tagPart> +;
<tagPart> = <digit> | <letter>;
<digit> = oh | zero | one | two | three | four | five | six |seven | eight | nine ;
<letter> = a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q | r | s | t | u | v | w | x | y | z ;
Everything works well unless I add weights. Running with weights:
<tagPart> = /0.8/ <digit> | /0.1/ <letter>;
I am getting the following error:
Exception in thread "main" java.lang.NullPointerException
at edu.cmu.sphinx.jsgf.JSGFGrammar.getNormalizedWeights(JSGFGrammar.java:49)
The way I am using the grammar is:
Configuration configuration = new Configuration();
configuration.setAcousticModelPath("file:/E/sphinx4-5prealpha-src/sphinx4-data/src/main/resources/edu/cmu/sphinx/models/en-us/en-us");
configuration.setDictionaryPath("file:/E/sphinx4-5prealpha-src/sphinx4-data/src/main/resources/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
configuration.setGrammarPath("file:/E/sT/src/main/resources/");
configuration.setGrammarName("tag");
configuration.setUseGrammar(true);
StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
I'm sorry for the delay; this issue has just been fixed in trunk in revision 13217. Please update and try again, and it should work.