How to replace text with a random number in SQL?

I'm using HeidiSQL and I have a 3 MB query I need to run, but I would like to replace certain text/values with a random number first. How would I do that?
I need to do the replacement programmatically, because there are a lot of rows.
INSERT INTO creature (`guid`, `id`, `map`, `zoneId`, `areaId`, `spawnMask`, `PhaseId`, `PhaseGroup`, `modelid`, `equipment_id`, `position_x`, `position_y`, `position_z`, `orientation`, `spawntimesecs`, `spawndist`, `currentwaypoint`, `curhealth`, `curmana`, `MovementType`, `npcflag`, `unit_flags`, `dynamicflags`, `VerifiedBuild`) VALUES
('#CGUID+0', 83855, 1116, 0, 0, 3, '0', 0, 0, 0, 1504.222, -2147.853, 90.73972, 0.6455684, 7200, 10, 0, 0, 0, 1, 0, 0, 0, 21463), -- 83855 (Area: 7120) (possible waypoints or random movement)
('#CGUID+1', 81244, 1116, 0, 0, 3, '0', 0, 0, 0, 1514.845, -2106.458, 92.60474, 2.908402, 7200, 0, 0, 0, 0, 0, 0, 0, 0, 21463), -- 81244 (Area: 7120) (Auras: 163908 - 163908)
('#CGUID+2', 81244, 1116, 0, 0, 3, '0', 0, 0, 0, 1484.29, -2122.714, 92.58028, 1.293478, 7200, 0, 0, 0, 0, 0, 0, 0, 0, 21463), -- 81244 (Area: 7120) (Auras: 163908 - 163908)
So basically I want to replace just the part where #CGUID+ is and substitute a random number (preferably between 1 and 999999). The random number then joins onto the existing digit, giving values like:
'4820940'
'2850331'
'2854962'
Note that the last digit isn't changing; it's just the #CGUID+ text that's being replaced. But I need the replacement to be random.

If you are using MySQL or SQL Server, one way to do that would be like this:
SELECT Replace(real_field1,'value to be replaced','new value') as scrambled_field1
FROM <any table>
WHERE <filter>
If you show your query, I could help you out.
Based on your SQL command, you could try something like this; I'm simplifying your query just to show the syntax:
INSERT INTO creature (guid) VALUES (REPLACE('#CGUID+1', '#CGUID+', FLOOR(RAND()*(999999-1)+1)));

Run the INSERT INTO statement as you have shown in your question.
Afterwards, run the following query (MySQL evaluates RAND() once per row here, so each row gets its own random prefix):
UPDATE creature
SET guid = REPLACE(guid, '#CGUID+', ROUND((RAND() * (999999-1))+1))
WHERE guid LIKE '#CGUID+%'
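Alternatively, since the dump is only 3 MB, you can randomize the values before the file ever reaches HeidiSQL. Here is a minimal Python sketch of that idea; the filename creatures.sql is hypothetical:
import random
import re

# Read the whole 3 MB dump; it is small enough to hold in memory.
with open("creatures.sql") as f:
    sql = f.read()

# Swap every '#CGUID+' prefix for a fresh random number, keeping the
# trailing row index, so '#CGUID+0' becomes something like '4820940'.
sql = re.sub(r"#CGUID\+", lambda _: str(random.randint(1, 999999)), sql)

with open("creatures_random.sql", "w") as f:
    f.write(sql)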

Related

Issues with OFDM transmitter and receiver in GNU Radio

I am having some issues with GNU Radio when trying to use the OFDM transmitter and receiver. I am vaguely following the example on the wiki; here is my flowgraph: [flowgraph image]
I am struggling to get the correct values for the OFDM modules. I have tried multiple values for the Occupied Carriers and Sync Word.
Current values:
Occupied Carriers: (list(range(-26, -21)) + list(range(-20, -7)) + list(range(-6, 0)) + list(range(1, 7)) + list(range(8, 21)) + list(range(22, 27)),)
Sync Word 1&2: (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
Pilot Carrier: ((-21, -7, 7, 21,),)
The flowgraph does not run, and it gives the following error:
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
1. gnuradio.digital.digital_python.ofdm_carrier_allocator_cvc(fft_len: int, occupied_carriers: List[List[int]], pilot_carriers: List[List[int]], pilot_symbols: List[List[complex]], sync_words: List[List[complex]], len_tag_key: str = 'packet_len', output_is_shifted: bool = True)
Invoked with: 64; kwargs: occupied_carriers=([-26, -25, -24, -23, -22, -20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -6, -5, -4, -3, -2, -1, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 23, 24, 25, 26],), pilot_carriers=((-21, -7, 7, 21),), pilot_symbols=(1, 1, 1, -1), sync_words=[(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], len_tag_key='packet_len'
This is new territory for me, so please ask if I can clarify anything.
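No answer was posted for this one, but the traceback itself points at the mismatch: ofdm_carrier_allocator_cvc wants pilot_symbols (like occupied_carriers, pilot_carriers and sync_words) to be a list of lists, while the invocation shows the flat tuple (1, 1, 1, -1). A guess at the nesting the constructor is asking for, using the values from the post:
# pilot_symbols needs the same extra level of nesting as pilot_carriers
occupied_carriers = (list(range(-26, -21)) + list(range(-20, -7)) +
                     list(range(-6, 0)) + list(range(1, 7)) +
                     list(range(8, 21)) + list(range(22, 27)),)
pilot_carriers = ((-21, -7, 7, 21,),)
pilot_symbols = ((1, 1, 1, -1),)  # note the extra parentheses
As an aside, an all-zero sync word gives the receiver's correlator nothing to lock onto, so even with the types fixed you will likely want the non-zero sync sequences from the wiki example.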

How to print specific information from value_counts()?

import pandas as pd
data = {'qtd': [0, 1, 4, 0, 1, 3, 1, 3, 0, 0,
3, 1, 3, 0, 1, 1, 0, 0, 1, 3,
0, 1, 0, 0, 1, 0, 1, 0, 0, 1,
0, 1, 1, 1, 1, 3, 0, 3, 0, 0,
2, 0, 0, 2, 0, 0, 2, 0, 0, 2,
0, 2, 0, 0, 2, 0, 0, 2, 0, 0,
2, 0, 0, 2, 0, 0, 2, 0, 0, 1,
1, 1, 1, 1, 0, 1, 0, 1, 0, 1,
0, 1, 0, 1, 0, 1, 0, 1, 1, 1,
1, 1, 1, 1, 1]
}
df = pd.DataFrame(data, columns=['qtd'])
Counting
df['qtd'].value_counts()
0 43
1 34
2 10
3 7
4 1
Name: qtd, dtype: int64
What I want is to print a phrase: "The total with zero occurrences is 43".
I tried .head(1), but it shows more than I want.
Does this solve your problem? The [0] is the index label you wish to look up, in this case the count for the value 0 in your column of the data frame.
print('The total with zero occurrences is:', df['qtd'].value_counts()[0])
The output of the code above will be:
The total with zero occurrences is: 43
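If the label might be absent (say, a column that contains no zeros at all), a slightly safer variant of the same idea is Series.get, which returns a default instead of raising a KeyError:
zero_count = df['qtd'].value_counts().get(0, 0)  # falls back to 0 if no zeros exist
print('The total with zero occurrences is:', zero_count)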
I am not sure if this is what you want, but it may be helpful:
import inflect
e = inflect.engine()
(df['qtd'].map(e.number_to_words).radd("The total with ").add(" occurrences is ")
 .value_counts().astype(str).reset_index().agg(':'.join, 1))
0 The total with zero occurrences is :43
1 The total with one occurrences is :34
2 The total with two occurrences is :10
3 The total with three occurrences is :7
4 The total with four occurrences is :1
dtype: object

Fill values in a numpy array that lie between 1's

Let's say I have an array that looks like this:
a = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])
I want to fill the values that are between 1's with 1's.
So this would be the desired output:
a = np.array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])
I have taken a look at this answer, which yields the following:
array([0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1])
I am sure this answer is really close to the output I want. However, although I've tried countless times, I can't adapt that code to make it work the way I want, as I am not that proficient with numpy arrays.
Any help is much appreciated!
Try this
b = ((a == 1).cumsum() % 2) | a
Out[10]:
array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0], dtype=int32)
From #Paul Panzer: use ufunc.accumulate with bitwise_xor
b = np.bitwise_xor.accumulate(a)|a
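Both one-liners rest on the same parity idea: a running XOR (or cumulative-sum parity) over the 1's is 1 exactly while a pair is open, and OR-ing with a puts back the closing 1 of each pair. A quick check:
import numpy as np

a = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])
inside = np.bitwise_xor.accumulate(a)  # 1 from each opening 1 up to (not including) its closing 1
print(inside | a)
# [0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 0]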
Try this:
import numpy as np

num_lst = np.array(
    [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])

i = 0
while i < len(num_lst) - 1:       # Iterate through the list (the last element has no successor)
    if num_lst[i]:                # Check if the element at the i-th position is 1
        if not num_lst[i + 1]:    # Check if the next element is 0
            num_lst[i + 1] = 1    # Change the next element to 1
            i += 1                # Continue through the loop
        else:                     # The next element is 1, so the pair is closed
            i += 2                # Skip over it
    else:
        i += 1                    # Continue through the loop
print(num_lst)
This is probably not the most elegant way to execute this, but it should work. Basically, we loop through the list to find any 1s. When we find an element that is 1, we check if the next element is 0. If it is, then we change the next element to 1. If the next element is 1, that means we should stop changing 0s to 1s, so we jump over that element and proceed with the iteration.

How to write a SQL query to pull a value from a nested JSON object identified by a variable field name

Problem: how to write a SQLite statement to select a value from a nested JSON object when the needed name is dynamic/variable. It is also important that this can be done in a single SQL statement; eventually, this will be executed from within a bash script.
In the object sample below, I need to list all the dot11.advertisedssid.ssid values in the SQL database. An acceptable solution is to list all values of dot11.advertisedssid.ssid that exist in the JSON object, but I would like to understand how to query a dynamic JSON name (so I can get at the other nested values).
In general I am using json_extract in my SQL statements; I just can't figure out how to get to the ssid value (in this example)!
How do I know that 733545801 is the field name, and how can I then use it in the json_extract statement? And how do I do that for all such nested objects?
Examples:
In general, this is how I am querying other JSON values.
select json_extract(devices.device,'$."dot11.device"."dot11.device.typeset"') from devices;
An object sample from the database:
"dot11.device": {
"dot11.device.typeset": 257,
"dot11.device.client_map": {
},
"dot11.device.num_client_aps": 0,
"dot11.device.advertised_ssid_map": {
"733545801": {
"dot11.advertisedssid.ssid": "SampleFES-WiFi",
"dot11.advertisedssid.ssidlen": 15,
"dot11.advertisedssid.beacon": 1,
"dot11.advertisedssid.probe_response": 1,
"dot11.advertisedssid.channel": "6",
"dot11.advertisedssid.ht_mode": "HT20",
"dot11.advertisedssid.ht_center_1": 0,
"dot11.advertisedssid.ht_center_2": 0,
"dot11.advertisedssid.first_time": 1559567379,
"dot11.advertisedssid.last_time": 1559567379,
"dot11.advertisedssid.beacon_info": "",
"dot11.advertisedssid.cloaked": 0,
"dot11.advertisedssid.crypt_set": 268436162,
"dot11.advertisedssid.maxrate": 65.000000,
"dot11.advertisedssid.beaconrate": 10,
"dot11.advertisedssid.beacons_sec": 2,
"dot11.advertisedssid.ietag_checksum": 1220416683,
"dot11.advertisedssid.wpa_mfp_required": 0,
"dot11.advertisedssid.wpa_mfp_supported": 0,
"dot11.advertisedssid.dot11d_country": "",
"dot11.advertisedssid.dot11d_list": [
],
"dot11.advertisedssid.wps_state": 0,
"dot11.advertisedssid.dot11r_mobility": 0,
"dot11.advertisedssid.dot11r_mobility_domain_id": 0,
"dot11.advertisedssid.dot11e_qbss": 0,
"dot11.advertisedssid.dot11e_qbss_stations": 0,
"dot11.advertisedssid.dot11e_channel_utilization_perc": 0.000000,
"dot11.advertisedssid.ccx_txpower": 0,
"dot11.advertisedssid.cisco_client_mfp": 0,
"dot11.advertisedssid.ie_tag_list": [
0.000000, 1.000000, 3.000000, 5.000000, 42.000000, 50.000000, 48.000000, 45.000000, 61.000000, 127.000000, 221.000000
]
}
}
Thanks for the help!
P.S. This is from the new Kismet database and the redesigned schema.
Here is the whole object:
{
"kismet.device.base.manuf": "Texas Instruments",
"kismet.device.base.key": "4202770D00000000_AFB4F569D2380000",
"kismet.device.base.macaddr": "38:D2:69:F5:B4:AF",
"kismet.device.base.phyname": "IEEE802.11",
"kismet.device.base.phyid": 0,
"kismet.device.base.name": "LincolnFES-WiFi",
"kismet.device.base.commonname": "LincolnFES-WiFi",
"kismet.device.base.type": "Wi-Fi AP",
"kismet.device.base.basic_type_set": 1,
"kismet.device.base.crypt": "WPA2-PSK",
"kismet.device.base.basic_crypt_set": 2,
"kismet.device.base.first_time": 1559567379,
"kismet.device.base.last_time": 1559567379,
"kismet.device.base.mod_time": 1559567380,
"kismet.device.base.packets.total": 3,
"kismet.device.base.packets.rx": 0,
"kismet.device.base.packets.tx": 0,
"kismet.device.base.packets.llc": 3,
"kismet.device.base.packets.error": 0,
"kismet.device.base.packets.data": 0,
"kismet.device.base.packets.crypt": 0,
"kismet.device.base.packets.filtered": 0,
"kismet.device.base.datasize": 0,
"kismet.device.base.packets.rrd": {
"kismet.common.rrd.last_time": 1559567383,
"kismet.common.rrd.minute_vec": [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
],
"kismet.common.rrd.blank_val": 0,
"kismet.common.rrd.aggregator": "default",
"kismet.common.rrd.hour_vec": [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
],
"kismet.common.rrd.day_vec": [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
]
},
"kismet.device.base.signal": {
"kismet.common.signal.type": "dbm",
"kismet.common.signal.last_signal": -56,
"kismet.common.signal.last_noise": 0,
"kismet.common.signal.min_signal": -74,
"kismet.common.signal.min_noise": 0,
"kismet.common.signal.max_signal": -56,
"kismet.common.signal.max_noise": 0,
"kismet.common.signal.maxseenrate": 10,
"kismet.common.signal.encodingset": 1,
"kismet.common.signal.carrierset": 1,
"kismet.common.signal.signal_rrd": {
"kismet.common.rrd.last_time": 1559567383,
"kismet.common.rrd.minute_vec": [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
],
"kismet.common.rrd.blank_val": 0,
"kismet.common.rrd.aggregator": "peak_signal"
}
},
"kismet.device.base.freq_khz_map": {
"2437000.000000": 1,
"2442000.000000": 1,
"5500000.000000": 1
},
"kismet.device.base.channel": "6",
"kismet.device.base.frequency": 2442000,
"kismet.device.base.num_alerts": 0,
"kismet.device.base.tags": {
},
"kismet.device.base.seenby": {
"-1970862229": {
"kismet.common.seenby.uuid": "5FE308BD-0000-0000-0000-00C0CAA60413",
"kismet.common.seenby.first_time": 1559567379,
"kismet.common.seenby.last_time": 1559567379,
"kismet.common.seenby.num_packets": 3,
"kismet.common.seenby.freq_khz_map": {
"2437000.000000": 1,
"2442000.000000": 1,
"5500000.000000": 1
},
"kismet.common.seenby.signal": {
"kismet.common.signal.type": "dbm",
"kismet.common.signal.last_signal": -56,
"kismet.common.signal.last_noise": 0,
"kismet.common.signal.min_signal": -74,
"kismet.common.signal.min_noise": 0,
"kismet.common.signal.max_signal": -56,
"kismet.common.signal.max_noise": 0,
"kismet.common.signal.maxseenrate": 10,
"kismet.common.signal.encodingset": 1,
"kismet.common.signal.carrierset": 1,
"kismet.common.signal.signal_rrd": {
"kismet.common.rrd.last_time": 1559567383,
"kismet.common.rrd.minute_vec": [
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
],
"kismet.common.rrd.blank_val": 0,
"kismet.common.rrd.aggregator": "peak_signal"
}
}
}
},
"kismet.device.base.server_uuid": "A8F71A2C-85F8-11E9-BA41-4B49534D4554",
"dot11.device": {
"dot11.device.typeset": 257,
"dot11.device.client_map": {
},
"dot11.device.num_client_aps": 0,
"dot11.device.advertised_ssid_map": {
"733545801": {
"dot11.advertisedssid.ssid": "LincolnFES-WiFi",
"dot11.advertisedssid.ssidlen": 15,
"dot11.advertisedssid.beacon": 1,
"dot11.advertisedssid.probe_response": 1,
"dot11.advertisedssid.channel": "6",
"dot11.advertisedssid.ht_mode": "HT20",
"dot11.advertisedssid.ht_center_1": 0,
"dot11.advertisedssid.ht_center_2": 0,
"dot11.advertisedssid.first_time": 1559567379,
"dot11.advertisedssid.last_time": 1559567379,
"dot11.advertisedssid.beacon_info": "",
"dot11.advertisedssid.cloaked": 0,
"dot11.advertisedssid.crypt_set": 268436162,
"dot11.advertisedssid.maxrate": 65,
"dot11.advertisedssid.beaconrate": 10,
"dot11.advertisedssid.beacons_sec": 2,
"dot11.advertisedssid.ietag_checksum": 1220416683,
"dot11.advertisedssid.wpa_mfp_required": 0,
"dot11.advertisedssid.wpa_mfp_supported": 0,
"dot11.advertisedssid.dot11d_country": "",
"dot11.advertisedssid.dot11d_list": [
],
"dot11.advertisedssid.wps_state": 0,
"dot11.advertisedssid.dot11r_mobility": 0,
"dot11.advertisedssid.dot11r_mobility_domain_id": 0,
"dot11.advertisedssid.dot11e_qbss": 0,
"dot11.advertisedssid.dot11e_qbss_stations": 0,
"dot11.advertisedssid.dot11e_channel_utilization_perc": 0,
"dot11.advertisedssid.ccx_txpower": 0,
"dot11.advertisedssid.cisco_client_mfp": 0,
"dot11.advertisedssid.ie_tag_list": [
0, 1, 3, 5, 42, 50, 48, 45, 61, 127, 221
]
}
},
"dot11.device.num_advertised_ssids": 1,
"dot11.device.probed_ssid_map": {
},
"dot11.device.num_probed_ssids": 0,
"dot11.device.associated_client_map": {
},
"dot11.device.num_associated_clients": 0,
"dot11.device.client_disconnects": 0,
"dot11.device.last_sequence": 0,
"dot11.device.bss_timestamp": 0,
"dot11.device.num_fragments": 0,
"dot11.device.num_retries": 0,
"dot11.device.datasize": 0,
"dot11.device.datasize_retry": 0,
"dot11.device.last_probed_ssid_csum": 0,
"dot11.device.last_beaconed_ssid": "LincolnFES-WiFi",
"dot11.device.last_beaconed_ssid_checksum": 733545801,
"dot11.device.last_bssid": "38:D2:69:F5:B4:AF",
"dot11.device.last_beacon_timestamp": 1559567379,
"dot11.device.wps_m3_count": 0,
"dot11.device.wps_m3_last": 0,
"dot11.device.wpa_handshake_list": [
],
"dot11.device.wpa_nonce_list": [
],
"dot11.device.wpa_anonce_list": [
],
"dot11.device.wpa_present_handshake": 0,
"dot11.device.min_tx_power": 0,
"dot11.device.max_tx_power": 0,
"dot11.device.supported_channels": [
],
"dot11.device.link_measurement_capable": 0,
"dot11.device.neighbor_report_capable": 0,
"dot11.device.extended_capabilities": [
],
"dot11.device.beacon_fingerprint": 4212996422,
"dot11.device.probe_fingerprint": 0,
"dot11.device.response_fingerprint": 0
}
}
When you want to recursively walk through the fields of an entire object and its contents, you need json_tree():
SELECT j.value
FROM devices AS d
JOIN json_tree(d.device) AS j
WHERE j.key = 'dot11.advertisedssid.ssid';
gives
value
--------------
SampleFES-WiFi
when run on a table holding a fixed version of that sample object.
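Since the eventual goal is a bash script, note that the same query runs through any SQLite client built with the JSON1 extension. A minimal Python sketch, assuming the database file is named kismet.db (the filename is an assumption):
import sqlite3

con = sqlite3.connect("kismet.db")
query = """
    SELECT j.value
    FROM devices AS d
    JOIN json_tree(d.device) AS j
    WHERE j.key = 'dot11.advertisedssid.ssid'
"""
for (ssid,) in con.execute(query):
    print(ssid)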
I know this is a bit old, but the OP seemed (in comments) to want a more complete solution, and so did I when I first came across this answer. The accepted solution lets you pull one field from the JSON blob, but the common use case in the OP's example is to pull multiple fields from that blob. After some searching I found that the json_extract() function works very well for this once you realize that the "dot11.device.advertised_ssid_map" object can be indexed like an array: once you provide an index, the normal query method works.
Considerations:
The OP's example relates to the Kismet device field in the devices table, so my example uses a common query that I often need in the context of that table
With Kismet the keys used in these JSON blobs are long and contain dots, so the syntax for specifying them in SQLite3 is a bit cumbersome for some nested values
SQLite3's JSON1 extension does not seem to like some of the wildcarding syntax normally allowed in JSONPath specifications, so long explicit paths are required
So here is my solution:
SELECT devmac, strongest_signal,
json_extract(d.device, '$."dot11.device"."dot11.device.advertised_ssid_map"[0]."dot11.advertisedssid.ssid"') AS ssid,
json_extract(d.device, '$."dot11.device"."dot11.device.advertised_ssid_map"[0]."dot11.advertisedssid.cloaked"') AS cloaked,
json_extract(d.device, '$."kismet.device.base.signal"."kismet.common.signal.min_signal"') AS weakest_signal,
json_extract(d.device, '$."kismet.device.base.channel"') AS channel,
json_extract(d.device, '$."dot11.device"."dot11.device.num_associated_clients"') AS clientCnt,
json_extract(d.device, '$."kismet.device.base.crypt"') AS crypt,
json_extract(d.device, '$."kismet.device.base.manuf"') AS manuf
FROM devices AS d
WHERE type = 'Wi-Fi AP'
;
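If you also need the variable key itself (the "733545801" in the question), one option is to pull the blob out and walk the map client-side. A sketch under the same hypothetical kismet.db filename:
import json
import sqlite3

con = sqlite3.connect("kismet.db")
for (blob,) in con.execute("SELECT device FROM devices WHERE type = 'Wi-Fi AP'"):
    dev = json.loads(blob)
    ssid_map = dev.get("dot11.device", {}).get("dot11.device.advertised_ssid_map", {})
    for key, entry in ssid_map.items():  # key is the dynamic name, e.g. "733545801"
        print(key, entry.get("dot11.advertisedssid.ssid"))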

How to build a word embedding model using Tflearn?

Updated
I am working on a word embedding model for answer matching score prediction using Tflearn. I have to build the model with sentence vectors using the Tflearn DNN classifier, and now I have to add a word embedding layer to the DNN model. How do I do that? Thanks in advance.
"JVMdefines": enables a computer to run a Java program
is converted as:
"JVMdefines": [[list([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
enables a computer to run a Java program :
list([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])]
My question: is there any method by which the machine can analyze
enables a "machine" to run a Java program
and detect that "computer" and "machine" have the same meaning?
I would post a clarifying comment, but I do not have enough reputation to do so, so I will try to answer given the information you have presented in the original question...
Your problem seems unclear, but here is how you would do this for a binary classification problem in tflearn.
Step 1: Preprocessing
The first thing you need to do is tokenize and transform your sentences into lists of integers:
"What kind of food do you like?" ---> [234,64,12,5224,43,96,23]
Then, most people pad their sequences so they are all the same length, cutting off long ones or lengthening short ones by padding with 0's.
[234,64,12,5224,43,96,23] ---> [0,0,0,0....234,64,12,5224,43,96,23]
Hint:
from tflearn.data_utils import pad_sequences
padded = pad_sequences(unpadded, maxlen=max_document_length, value=0.)
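For completeness, here is a minimal sketch of the whole tokenize-then-pad step; the vocabulary construction is illustrative, not a tflearn API:
from tflearn.data_utils import pad_sequences

sentences = ["What kind of food do you like"]
vocab, sequences = {}, []
for s in sentences:
    seq = []
    for word in s.lower().split():
        vocab.setdefault(word, len(vocab) + 1)  # index 0 is reserved for padding
        seq.append(vocab[word])
    sequences.append(seq)

max_document_length = 16
padded = pad_sequences(sequences, maxlen=max_document_length, value=0.)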
Step 2: Model Building
After you transform all the text you have into integer sequences, you can build the model. Note here that our input shape is [None, max_document_length]. None means optional size (allows for variable batch size), and max_document_length is the length of our sequences that we padded previously.
#Create our model (imports shown for completeness)
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data

network = input_data(shape=[None, max_document_length], name='input')
Create the embedding matrix. Note that you push the embedding matrix to the CPU. The input_dim parameter expects an integer that represents the size of your vocabulary; output_dim is the size of your embedding.
with tf.device('/cpu:0'):
network = tflearn.embedding(network, input_dim=vocabulary_size, output_dim=128)
#Pass embeddings into an lstm layer (handles sequential problems)
network = tflearn.lstm(network, 512, dropout=0.8)
#Squish data into a fully connected layer, with 2 outputs for binary classification
network = tflearn.fully_connected(network, 2, activation='softmax')
#Perform regression to get the final answer
network = tflearn.regression(network, optimizer='rmsprop', learning_rate=0.001,
                             loss='categorical_crossentropy')
#Wrap the graph we just created in a tflearn DNN wrapper
model = tflearn.DNN(network)
#Run model.fit to actually train your model
model.fit(x_train, y_train, n_epoch=15, shuffle=True, validation_set=(x_val, y_val), show_metric=True, batch_size=batch_size)
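On the updated question (treating "computer" and "machine" as similar): a randomly initialized embedding will not learn that from a small dataset. The usual approach is to seed the embedding layer with pretrained vectors (word2vec or GloVe) so that related words start out close together. A hedged sketch of overwriting the layer weights in tflearn; loading the real pretrained vectors is elided:
import numpy as np

# Stand-in matrix; in practice fill each row with the pretrained vector
# for the corresponding vocabulary index.
pretrained = np.random.uniform(-1.0, 1.0, (vocabulary_size, 128))

# 'Embedding' is the default scope name of the tflearn.embedding layer above
embedding_vars = tflearn.get_layer_variables_by_name('Embedding')
model.set_weights(embedding_vars[0], pretrained)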