How to add repetition index and unstack in pandas? - pandas

I have a dataframe as follows:
time a b c d e
2006/1/16 249 249 250 250 251
2006/2/15 254 253 255 255 255
2006/3/16 261 261 262 262 264
2006/4/16 272 271 273 273 274
2006/5/16 282 281 283 283 283
2006/6/16 288 287 289 289 289
2006/7/16 292 292 293 293 293
2006/8/16 290 290 291 291 292
2006/9/16 282 281 283 283 284
2006/10/16 271 270 272 272 273
2006/11/16 259 258 260 260 261
2006/12/16 251 251 252 252 253
2007/1/16 247 247 247 248 250
2007/2/15 253 253 254 254 255
2007/3/16 261 261 262 262 264
2007/4/16 273 272 274 274 275
2007/5/16 282 281 283 283 283
2007/6/16 288 288 290 289 290
2007/7/16 292 292 293 293 294
2007/8/16 291 290 291 291 292
2007/9/16 282 282 283 283 284
2007/10/16 271 270 272 272 273
2007/11/16 260 259 261 261 262
I want to unstack it into something like this, with months 1-12 as the columns and the letter plus the year as the row index:
        1   2   3   4   5   6   7   8   9   10  11  12
a 2006  ................................................
  2007  ................................................
b 2006  ................................................
  2007  ................................................
...
e 2006  ................................................
  2007  ................................................
Can pandas timestamps be applied here? And how do I generate a year and month index if there is no time column?
year month
2006 1
2006 2
... ..
2006 12
2007 1
2007 2
... ...
2007 12
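For the second part of the question: if there is no time column, a year/month MultiIndex like the one above can be generated directly, for example with MultiIndex.from_product (a minimal sketch; the year range is taken from the sample data):
import pandas as pd

# Cartesian product of years and months: (2006, 1), (2006, 2), ..., (2007, 12)
idx = pd.MultiIndex.from_product([[2006, 2007], range(1, 13)],
                                 names=['year', 'month'])

# e.g. assign it to a frame that already has one row per (year, month):
# df.index = idx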

I'd construct a new pd.Series from numpy arrays and unstack:
import numpy as np
import pandas as pd

df.time = pd.to_datetime(df.time)
cols = list('abcde')
n, m = len(df), len(cols)

v = np.concatenate([df[c].values for c in cols])  # all values, column by column
i = np.repeat(cols, n)                            # column letter repeated once per row
y = np.tile(df.time.dt.year.values, m)            # years tiled once per column
mo = np.tile(df.time.dt.month.values, m)          # months tiled once per column

pd.Series(v, pd.MultiIndex.from_arrays([i, y, mo])).unstack(fill_value=0)
1 2 3 4 5 6 7 8 9 10 11 12
a 2006 249 254 261 272 282 288 292 290 282 271 259 251
2007 247 253 261 273 282 288 292 291 282 271 260 0
b 2006 249 253 261 271 281 287 292 290 281 270 258 251
2007 247 253 261 272 281 288 292 290 282 270 259 0
c 2006 250 255 262 273 283 289 293 291 283 272 260 252
2007 247 254 262 274 283 290 293 291 283 272 261 0
d 2006 250 255 262 273 283 289 293 291 283 272 260 252
2007 248 254 262 274 283 289 293 291 283 272 261 0
e 2006 251 255 264 274 283 289 293 292 284 273 261 253
2007 250 255 264 275 283 290 294 292 284 273 262 0

Use to_datetime first, then create a MultiIndex.from_arrays from the month and year columns and assign it to the index. Then drop the time column, unstack, and finally transpose with T:
df['time'] = pd.to_datetime(df['time'])
df.index = pd.MultiIndex.from_arrays([df['time'].dt.month, df['time'].dt.year],
                                     names=(None, None))
df = df.drop('time', axis=1).unstack(fill_value=0).T
print(df)
1 2 3 4 5 6 7 8 9 10 11 12
a 2006 249 254 261 272 282 288 292 290 282 271 259 251
2007 247 253 261 273 282 288 292 291 282 271 260 0
b 2006 249 253 261 271 281 287 292 290 281 270 258 251
2007 247 253 261 272 281 288 292 290 282 270 259 0
c 2006 250 255 262 273 283 289 293 291 283 272 260 252
2007 247 254 262 274 283 290 293 291 283 272 261 0
d 2006 250 255 262 273 283 289 293 291 283 272 260 252
2007 248 254 262 274 283 289 293 291 283 272 261 0
e 2006 251 255 264 274 283 289 293 292 284 273 261 253
2007 250 255 264 275 283 290 294 292 284 273 262 0
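For reference, the sample frame used in both answers can be rebuilt from the table in the question, e.g. with read_csv on an in-memory string (a sketch; only the first three rows are written out here):
import io
import pandas as pd

text = """time a b c d e
2006/1/16 249 249 250 250 251
2006/2/15 254 253 255 255 255
2006/3/16 261 261 262 262 264"""

df = pd.read_csv(io.StringIO(text), sep=r'\s+')
df['time'] = pd.to_datetime(df['time'])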

Related

How to drop similar data from all rows and multiply by float numbers

How to separate all of the columns?
df = df[['hlogUs_dB', 'hlogDs_dB']]
df
hlogUs_dB hlogDs_dB
0 109:-3.4,110:-3.4,111:-3.4,112:-3.5,113:-3.5,1... 5:-2.5,6:-2.5,7:-2.1,8:-2.0,9:-2.0,10:-2.0,11:...
1 109:-3.5,110:-3.5,111:-3.4,112:-3.4,113:-3.4,1... 5:-2.1,6:-2.0,7:-1.8,8:-1.8,9:-1.8,10:-1.8,11:...
2 109:-3.7,110:-3.7,111:-3.8,112:-3.8,113:-3.8,1... 5:-2.1,6:-2.0,7:-1.8,8:-1.8,9:-1.8,10:-1.8,11:...
3 109:-3.5,110:-3.6,111:-3.6,112:-3.6,113:-3.7,1... 5:-2.5,6:-2.5,7:-2.1,8:-2.0,9:-2.0,10:-2.0,11:...
4 109:-3.7,110:-3.8,111:-3.8,112:-3.8,113:-3.8,1... 5:-2.5,6:-2.5,7:-2.1,8:-2.1,9:-2.0,10:-2.1,11:...
... ... ...
165 109:-5.2,110:-5.3,111:-5.5,112:-5.7,113:-5.9,1... 5:-2.5,6:-2.5,7:-2.1,8:-2.1,9:-2.1,10:-2.2,11:...
166 109:-5.5,110:-5.6,111:-5.8,112:-6.1,113:-6.3,1... 5:-2.8,6:-2.7,7:-2.5,8:-2.5,9:-2.3,10:-2.5,11:...
167 109:-6.0,110:-6.2,111:-6.4,112:-6.7,113:-7.1,1... 5:-2.6,6:-2.5,7:-2.2,8:-2.2,9:-2.2,10:-2.3,11:...
168 109:-5.4,110:-5.5,111:-5.7,112:-5.9,113:-6.2,1... 5:-3.0,6:-3.0,7:-2.6,8:-2.5,9:-2.5,10:-2.5,11:...
169 109:-5.9,110:-6.1,111:-6.4,112:-6.6,113:-7.0,1... 5:-2.7,6:-2.5,7:-2.3,8:-2.2,9:-2.3,10:-2.3,11:...
170 rows × 2 columns
After that I split on delimiters, for hlogUs_dB only:
df2 = df['hlogUs_dB'].str.split('[,:]', expand=True)
df2 = df2.drop(["0"], errors="ignore")
df2
The result:
0 1 2 3 4 5 6 7 8 9 ... 276 277 278 279 280 281 282 283 284 285
0 109 -3.4 110 -3.4 111 -3.4 112 -3.5 113 -3.5 ... 343 -4.3 344 -4.3 345 -4.2 346 -4.2 347 -4.2
1 109 -3.5 110 -3.5 111 -3.4 112 -3.4 113 -3.4 ... 343 -4.1 344 -4.2 345 -4.4 346 -4.4 347 -4.2
2 109 -3.7 110 -3.7 111 -3.8 112 -3.8 113 -3.8 ... 343 -4.2 344 -4.3 345 -4.3 346 -4.3 347 -4.3
3 109 -3.5 110 -3.6 111 -3.6 112 -3.6 113 -3.7 ... 343 -4.1 344 -4.1 345 -4.1 346 -4.1 347 -4.1
4 109 -3.7 110 -3.8 111 -3.8 112 -3.8 113 -3.8 ... 343 -4.2 344 -4.2 345 -4.2 346 -4.2 347 -4.3
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
165 109 -5.2 110 -5.3 111 -5.5 112 -5.7 113 -5.9 ... 343 -5.4 344 -5.3 345 -5.2 346 -5.1 347 -5.1
166 109 -5.5 110 -5.6 111 -5.8 112 -6.1 113 -6.3 ... 343 -5.5 344 -5.4 345 -5.3 346 -5.2 347 -5.2
167 109 -6.0 110 -6.2 111 -6.4 112 -6.7 113 -7.1 ... 343 -4.9 344 -4.9 345 -4.9 346 -4.9 347 -4.9
168 109 -5.4 110 -5.5 111 -5.7 112 -5.9 113 -6.2 ... 343 -5.9 344 -5.7 345 -5.7 346 -5.6 347 -5.6
169 109 -5.9 110 -6.1 111 -6.4 112 -6.6 113 -7.0 ... 343 -5.7 344 -5.7 345 -5.7 346 -5.6 347 -5.6
170 rows × 286 columns
After that I want to drop the numbers that appear only in the even columns. I managed to find a solution, but somehow it does not suit my preference.
df2.drop(columns=[0,2,4,6,8,9,10,12,14,16,18,20,22,24,26,28,30,32,34,36])
df2
The output:
0 1 2 3 4 5 6 7 8 9 ... 276 277 278 279 280 281 282 283 284 285
0 109 -3.4 110 -3.4 111 -3.4 112 -3.5 113 -3.5 ... 343 -4.3 344 -4.3 345 -4.2 346 -4.2 347 -4.2
1 109 -3.5 110 -3.5 111 -3.4 112 -3.4 113 -3.4 ... 343 -4.1 344 -4.2 345 -4.4 346 -4.4 347 -4.2
2 109 -3.7 110 -3.7 111 -3.8 112 -3.8 113 -3.8 ... 343 -4.2 344 -4.3 345 -4.3 346 -4.3 347 -4.3
3 109 -3.5 110 -3.6 111 -3.6 112 -3.6 113 -3.7 ... 343 -4.1 344 -4.1 345 -4.1 346 -4.1 347 -4.1
4 109 -3.7 110 -3.8 111 -3.8 112 -3.8 113 -3.8 ... 343 -4.2 344 -4.2 345 -4.2 346 -4.2 347 -4.3
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
165 109 -5.2 110 -5.3 111 -5.5 112 -5.7 113 -5.9 ... 343 -5.4 344 -5.3 345 -5.2 346 -5.1 347 -5.1
166 109 -5.5 110 -5.6 111 -5.8 112 -6.1 113 -6.3 ... 343 -5.5 344 -5.4 345 -5.3 346 -5.2 347 -5.2
167 109 -6.0 110 -6.2 111 -6.4 112 -6.7 113 -7.1 ... 343 -4.9 344 -4.9 345 -4.9 346 -4.9 347 -4.9
168 109 -5.4 110 -5.5 111 -5.7 112 -5.9 113 -6.2 ... 343 -5.9 344 -5.7 345 -5.7 346 -5.6 347 -5.6
169 109 -5.9 110 -6.1 111 -6.4 112 -6.6 113 -7.0 ... 343 -5.7 344 -5.7 345 -5.7 346 -5.6 347 -5.6
170 rows × 286 columns
It still shows the same output as before. I just want the odd columns to be multiplied by 8 and by the float 4.3125; the results would then replace the values in those same columns. That was my rough idea.
df2*4.3125
The result is an error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in na_arithmetic_op(left, right, op, str_rep)
148 try:
--> 149 result = expressions.evaluate(op, str_rep, left, right)
150 except TypeError:
~\anaconda3\lib\site-packages\pandas\core\computation\expressions.py in evaluate(op, op_str, a, b, use_numexpr)
207 if use_numexpr:
--> 208 return _evaluate(op, op_str, a, b)
209 return _evaluate_standard(op, op_str, a, b)
~\anaconda3\lib\site-packages\pandas\core\computation\expressions.py in _evaluate_numexpr(op, op_str, a, b)
120 if result is None:
--> 121 result = _evaluate_standard(op, op_str, a, b)
122
~\anaconda3\lib\site-packages\pandas\core\computation\expressions.py in _evaluate_standard(op, op_str, a, b)
69 with np.errstate(all="ignore"):
---> 70 return op(a, b)
71
TypeError: can't multiply sequence by non-int of type 'float'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-24-424060d3aad6> in <module>
----> 1 df2*4.3125
~\anaconda3\lib\site-packages\pandas\core\ops\__init__.py in f(self, other, axis, level, fill_value)
717 self = self.fillna(fill_value)
718
--> 719 new_data = dispatch_to_series(self, other, op, str_rep)
720 return self._construct_result(new_data)
721
~\anaconda3\lib\site-packages\pandas\core\ops\__init__.py in dispatch_to_series(left, right, func, str_rep, axis)
376 # Get the appropriate array-op to apply to each block's values.
377 array_op = get_array_op(func, str_rep=str_rep)
--> 378 bm = left._data.apply(array_op, right=right)
379 return type(left)(bm)
380
~\anaconda3\lib\site-packages\pandas\core\internals\managers.py in apply(self, f, filter, **kwargs)
438
439 if callable(f):
--> 440 applied = b.apply(f, **kwargs)
441 else:
442 applied = getattr(b, f)(**kwargs)
~\anaconda3\lib\site-packages\pandas\core\internals\blocks.py in apply(self, func, **kwargs)
388 """
389 with np.errstate(all="ignore"):
--> 390 result = func(self.values, **kwargs)
391
392 if is_extension_array_dtype(result) and result.ndim > 1:
~\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in arithmetic_op(left, right, op, str_rep)
195 else:
196 with np.errstate(all="ignore"):
--> 197 res_values = na_arithmetic_op(lvalues, rvalues, op, str_rep)
198
199 return res_values
~\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in na_arithmetic_op(left, right, op, str_rep)
149 result = expressions.evaluate(op, str_rep, left, right)
150 except TypeError:
--> 151 result = masked_arith_op(left, right, op)
152
153 return missing.dispatch_fill_zeros(op, left, right, result)
~\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in masked_arith_op(x, y, op)
110 if mask.any():
111 with np.errstate(all="ignore"):
--> 112 result[mask] = op(xrav[mask], y)
113
114 result, _ = maybe_upcast_putmask(result, ~mask, np.nan)
TypeError: can't multiply sequence by non-int of type 'float'
I am stuck at this point. I have searched Stack Overflow and YouTube for the basics of multiplying by floats, but I think my keywords do not match the idea.
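For what it's worth, the TypeError happens because str.split returns string (object) columns, and a Python string cannot be multiplied by a float. A minimal sketch of one way to proceed, assuming df2 is the split result shown above (and remembering that drop() returns a new frame, so its result has to be assigned back):
import pandas as pd

# Convert every cell from string to a number first
df2 = df2.apply(pd.to_numeric, errors='coerce')

# Scale only the odd-positioned value columns by 4.3125 and write them back
odd_cols = df2.columns[1::2]
df2[odd_cols] = df2[odd_cols] * 4.3125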

Save a file from pixel values in .NET Core

There is a space-separated string that contains 2304 (48x48) items. I simply need to save this as a 48x48 image file. Downloaded from here
var img = "70 80 82 72 58 58 60 63 54 58 60 48 89 115 121 119 115 110 98 91 84 84 90 99 110 126 143 153 158 171 169 172 169 165 129 110 113 107 95 79 66 62 56 57 61 52 43 41 65 61 58 57 56 69 75 70 65 56 54 105 146 154 151 151 155 155 150 147 147 148 152 158 164 172 177 182 186 189 188 190 188 180 167 116 95 103 97 77 72 62 55 58 54 56 52 44 50 43 54 64 63 71 68 64 52 66 119 156 161 164 163 164 167 168 170 174 175 176 178 179 183 187 190 195 197 198 197 198 195 191 190 145 86 100 90 65 57 60 54 51 41 49 56 47 38 44 63 55 46 52 54 55 83 138 157 158 165 168 172 171 173 176 179 179 180 182 185 187 189 189 192 197 200 199 196 198 200 198 197 177 91 87 96 58 58 59 51 42 37 41 47 45 37 35 36 30 41 47 59 94 141 159 161 161 164 170 171 172 176 178 179 182 183 183 187 189 192 192 194 195 200 200 199 199 200 201 197 193 111 71 108 69 55 61 51 42 43 56 54 44 24 29 31 45 61 72 100 136 150 159 163 162 163 170 172 171 174 177 177 180 187 186 187 189 192 192 194 195 196 197 199 200 201 200 197 201 137 58 98 92 57 62 53 47 41 40 51 43 24 35 52 63 75 104 129 143 149 158 162 164 166 171 173 172 174 178 178 179 187 188 188 191 193 194 195 198 199 199 197 198 197 197 197 201 164 52 78 87 69 58 56 50 54 39 44 42 26 31 49 65 91 119 134 145 147 152 159 163 167 171 170 169 174 178 178 179 187 187 185 187 190 188 187 191 197 201 199 199 200 197 196 197 182 58 62 77 61 60 55 49 59 52 54 44 22 30 47 68 102 123 136 144 148 150 153 157 167 172 173 170 171 177 179 178 186 190 186 189 196 193 191 194 190 190 192 197 201 203 199 194 189 69 48 74 56 60 57 50 59 59 51 41 20 34 47 79 111 132 139 143 145 147 150 151 160 169 172 171 167 171 177 177 174 180 182 181 192 196 189 192 198 195 194 196 198 201 202 195 189 70 39 69 61 61 61 53 59 59 45 40 26 40 61 93 124 135 138 142 144 146 151 152 158 165 168 168 165 161 164 173 172 167 172 167 180 198 198 193 199 195 194 198 200 198 197 195 190 65 35 68 59 59 62 57 60 59 50 44 32 54 90 115 132 137 138 140 144 146 146 156 165 168 174 176 176 175 168 168 169 171 175 171 172 192 194 184 198 205 201 194 195 193 195 192 186 57 38 72 65 57 62 58 57 60 54 49 47 79 116 130 138 141 141 139 141 143 145 157 164 164 166 173 174 176 179 179 176 181 189 188 173 180 175 160 182 189 198 192 189 190 190 188 172 46 44 64 66 59 62 57 56 62 53 50 66 103 133 137 141 143 141 136 132 131 136 127 118 111 107 108 123 131 143 154 158 166 177 181 175 170 159 148 171 161 176 185 192 194 188 190 162 53 49 58 63 61 61 55 56 61 51 50 81 116 139 142 142 146 144 136 128 119 112 97 85 90 91 88 92 90 80 81 84 106 122 132 144 145 144 147 163 147 163 173 181 190 187 191 167 61 48 53 61 61 58 54 56 61 51 53 89 123 140 144 145 146 147 136 122 107 99 95 92 90 87 83 76 67 52 46 52 63 69 83 96 119 132 148 159 136 137 143 138 143 152 156 156 70 48 50 59 61 57 54 54 61 52 56 93 124 135 140 144 148 150 140 125 114 101 80 54 56 54 41 41 33 40 39 35 49 60 63 74 107 129 147 147 116 111 100 77 76 86 108 111 73 49 50 60 62 60 57 55 63 59 56 89 121 134 139 146 151 152 150 141 127 111 96 77 85 70 32 31 37 91 65 50 48 59 73 83 112 136 155 130 60 46 38 40 43 81 116 91 72 52 48 58 62 62 59 53 61 59 52 85 114 134 140 147 154 159 158 153 145 143 150 126 121 125 68 45 89 137 95 70 78 75 95 109 131 153 171 94 23 16 32 82 82 65 113 77 71 54 48 56 62 62 60 53 60 56 52 75 108 133 141 149 158 166 169 167 163 156 155 146 112 119 134 127 142 140 121 117 129 114 120 129 146 174 191 98 46 33 33 109 147 98 109 67 73 55 50 56 64 64 61 58 61 53 54 64 106 129 140 148 159 169 175 176 174 165 159 156 145 120 115 124 127 131 133 141 147 142 141 147 161 
182 202 154 114 96 100 158 158 153 123 61 76 57 48 56 64 64 63 62 61 54 55 44 97 131 137 147 158 168 177 181 183 179 170 168 169 165 155 152 151 152 154 162 165 158 153 158 168 187 206 186 147 135 144 145 152 178 115 57 74 58 48 58 64 63 63 59 63 55 53 66 104 130 132 144 153 162 170 180 185 187 181 178 182 180 177 173 171 171 177 176 172 164 161 167 164 185 207 197 173 152 141 141 161 191 104 54 69 60 48 57 65 62 60 57 64 55 50 94 111 124 130 135 150 159 163 172 179 184 184 178 178 177 173 171 174 177 178 176 169 165 161 163 161 180 205 201 183 171 177 178 180 194 101 55 65 60 47 55 65 63 59 58 63 57 52 90 105 117 122 130 143 153 157 163 171 174 182 183 182 178 174 175 175 177 175 172 163 161 159 157 162 178 200 201 188 181 172 177 187 198 98 57 63 61 48 52 61 64 63 60 65 57 51 95 104 113 117 127 136 145 152 156 162 162 165 173 177 182 183 183 180 181 177 165 153 154 152 153 160 174 193 200 188 185 180 182 192 196 101 60 60 56 49 50 60 66 64 62 64 59 53 99 104 111 112 118 132 142 147 155 158 160 159 162 171 176 184 186 183 180 169 154 141 135 145 155 164 180 196 205 188 189 188 189 193 192 98 61 64 55 49 49 60 66 63 64 63 60 57 99 105 108 112 113 125 139 143 150 155 158 164 169 174 176 182 183 182 177 163 141 133 147 151 164 170 185 200 210 194 188 192 186 185 180 88 64 67 60 46 50 59 65 64 64 64 59 56 101 103 108 109 109 118 134 143 143 147 155 159 166 171 174 177 179 178 172 153 129 143 161 159 166 171 186 197 207 203 185 191 183 179 164 73 67 67 66 48 50 57 65 65 63 64 61 57 103 108 114 112 110 115 128 138 144 145 152 156 159 164 168 172 172 169 161 139 125 147 156 161 162 164 180 188 188 197 185 187 181 180 137 65 70 68 70 52 47 53 62 65 63 65 61 58 105 109 112 120 113 112 122 134 141 149 150 153 155 159 164 167 167 162 152 134 115 126 119 106 99 109 141 158 150 155 175 184 176 175 106 63 70 68 68 50 46 50 57 63 63 64 61 59 107 110 110 117 117 114 117 128 137 147 148 150 153 156 161 162 163 156 150 148 105 70 45 26 25 47 73 74 79 128 177 180 173 157 77 66 68 67 68 52 49 51 56 62 62 62 62 60 101 107 108 114 115 114 117 125 134 143 148 149 152 154 158 160 158 155 160 158 132 88 73 73 64 52 66 91 138 160 174 173 171 125 64 67 63 64 68 54 50 49 54 60 60 60 62 60 98 105 105 109 111 114 117 125 131 139 145 148 153 153 156 157 156 161 168 165 153 139 122 115 105 89 103 150 182 161 171 173 162 89 64 64 62 64 69 56 48 49 56 58 60 59 62 60 89 99 108 106 109 111 119 120 125 134 140 146 152 153 153 153 156 159 162 160 150 136 129 133 133 122 133 148 178 168 168 175 132 61 67 66 65 63 69 57 47 50 55 58 59 61 62 60 89 96 105 107 105 107 117 120 123 124 133 141 149 153 151 145 151 145 139 140 138 128 126 124 129 125 136 142 164 172 168 168 87 58 67 63 62 61 69 57 39 44 55 56 59 63 62 62 84 91 92 98 102 103 113 119 121 118 128 138 146 151 147 142 140 128 127 128 129 126 135 140 135 130 143 146 149 166 174 131 62 65 62 59 67 63 68 83 89 65 42 52 60 60 62 63 77 84 84 91 99 101 107 112 117 118 122 134 145 149 144 134 127 127 129 130 134 125 126 132 152 153 151 150 151 165 171 87 59 65 64 61 58 86 122 138 208 207 154 71 52 56 55 56 69 77 83 85 93 91 102 112 116 118 119 127 140 144 142 131 112 95 85 75 62 58 56 59 87 88 83 127 142 165 149 62 65 62 59 77 113 192 156 84 185 196 197 168 81 70 75 69 58 65 73 82 81 79 95 107 114 116 116 123 136 142 136 132 131 102 71 58 49 41 33 41 36 49 60 99 136 168 111 53 63 71 138 186 203 195 146 87 91 72 79 95 103 82 61 74 55 57 68 75 76 77 84 96 106 110 111 121 130 138 136 142 153 159 152 152 154 145 133 136 147 158 156 155 147 158 74 57 60 123 181 174 126 89 72 67 57 43 
55 67 76 86 60 45 51 45 52 68 75 73 77 88 96 100 104 113 115 121 134 146 149 146 149 148 155 168 174 179 178 169 169 174 161 131 44 47 82 150 168 136 104 75 66 80 67 58 48 54 68 88 121 102 51 45 38 53 66 65 70 86 92 96 102 103 109 116 130 136 136 133 136 138 137 135 128 130 143 158 165 164 147 87 62 74 123 160 170 100 99 107 79 71 86 75 57 45 49 65 122 130 43 48 40 39 55 61 59 71 82 87 88 93 105 118 123 128 130 124 111 98 94 88 67 55 84 129 147 148 105 48 82 142 161 164 164 76 72 85 100 88 72 90 84 54 48 54 73 100 73 36 44 31 37 53 51 55 67 74 77 87 97 108 118 125 132 122 106 86 80 82 75 73 83 110 129 126 46 22 130 177 196 193 166 72 52 54 73 100 92 75 99 95 65 68 61 63 91 65 42 37 22 28 39 44 57 68 74 83 92 101 119 131 143 141 134 136 140 139 134 136 139 138 136 85 23 114 202 198 199 180 173 98 36 86 130 150 137 99 77 101 99 72 56 43 77 82 79 70 56 28 20 25 36 50 63 73 83 98 111 124 139 156 160 159 169 168 165 163 159 149 114 43 26 133 183 192 177 152 137 130 125 139 173 195 186 137 101 88 101 105 70 46 77 72 84 87 87 81 64 37 20 31 40 46 65 88 108 110 125 149 157 153 162 164 158 159 154 140 78 21 11 61 144 168 173 157 138 150 148 132 159 182 183 136 106 116 95 106 109 82";
//save string as byte array
var arrStr = img.Split(' ');
var byt = arrStr.Select(byte.Parse).ToArray();
//save the file by this array, the line below throws an exception.
using (System.Drawing.Image image = System.Drawing.Image.FromStream(new MemoryStream(byt)))
{
image.Save("output.jpg", ImageFormat.Jpeg); // Or Png
}
And as you can guess, it doesn't work. How can I convert this pixel string to an image file? (The value was originally generated from a file.)

Sorting a string of numbers into a grid

I need to sort a long list of ID numbers into 'grids' of 8 ID numbers down (8 rows) and 6 ID numbers across (6 columns), sorted from smallest to largest ID number. When one grid is full, the numbers that cannot fit in the first grid should go on to form a second one, and so on. The last 4 cells of the last row of each grid should be blank. (This is a template for a lab procedure.)
i.e. this is the data I have (a single column of ID numbers), and this is how I want it to be (an example grid, but 6 of these).
Here's one method.
Sample data
import pandas as pd
import numpy as np
# Sorted list of string IDs
l = np.arange(0, 631, 1).astype('str')
Code
N = 44
# Ensure we can reshape the last group
data = np.concatenate((l, np.repeat('', N - len(l) % N)))
# Split the array and make a separate `DataFrame` for each grid
data = [
    pd.DataFrame(np.concatenate((x, np.repeat('', 4))).reshape(8, 6))
    for x in np.array_split(data, np.arange(N, len(l), N))
]
df = pd.concat(data, ignore_index=True)  # if you want a single df in the end
Output df:
0 1 2 3 4 5
0 0 1 2 3 4 5
1 6 7 8 9 10 11
2 12 13 14 15 16 17
3 18 19 20 21 22 23
4 24 25 26 27 28 29
5 30 31 32 33 34 35
6 36 37 38 39 40 41
7 42 43
8 44 45 46 47 48 49
9 50 51 52 53 54 55
10 56 57 58 59 60 61
11 62 63 64 65 66 67
12 68 69 70 71 72 73
13 74 75 76 77 78 79
14 80 81 82 83 84 85
15 86 87
16 88 89 90 91 92 93
...
110 608 609 610 611 612 613
111 614 615
112 616 617 618 619 620 621
113 622 623 624 625 626 627
114 628 629 630
115
116
117
118
119
import numpy as np
import pandas as pd

# Pad the array with zeros up to the next multiple of n, then reshape into `cols` columns
func = lambda lst, n: np.pad(lst, (0, n * (1 + len(lst) // n) - len(lst)), 'constant')
rows, cols = 8, 6
arr = np.arange(1, 283, 1)  # np.array(df.A)
new_df = pd.DataFrame(func(arr, rows * cols).reshape(-1, cols))
new_df
0 1 2 3 4 5
0 1 2 3 4 5 6
1 7 8 9 10 11 12
2 13 14 15 16 17 18
3 19 20 21 22 23 24
4 25 26 27 28 29 30
5 31 32 33 34 35 36
6 37 38 39 40 41 42
7 43 44 45 46 47 48
8 49 50 51 52 53 54
9 55 56 57 58 59 60
10 61 62 63 64 65 66
11 67 68 69 70 71 72
12 73 74 75 76 77 78
13 79 80 81 82 83 84
14 85 86 87 88 89 90
15 91 92 93 94 95 96
16 97 98 99 100 101 102
17 103 104 105 106 107 108
18 109 110 111 112 113 114
19 115 116 117 118 119 120
20 121 122 123 124 125 126
21 127 128 129 130 131 132
22 133 134 135 136 137 138
23 139 140 141 142 143 144
24 145 146 147 148 149 150
25 151 152 153 154 155 156
26 157 158 159 160 161 162
27 163 164 165 166 167 168
28 169 170 171 172 173 174
29 175 176 177 178 179 180
30 181 182 183 184 185 186
31 187 188 189 190 191 192
32 193 194 195 196 197 198
33 199 200 201 202 203 204
34 205 206 207 208 209 210
35 211 212 213 214 215 216
36 217 218 219 220 221 222
37 223 224 225 226 227 228
38 229 230 231 232 233 234
39 235 236 237 238 239 240
40 241 242 243 244 245 246
41 247 248 249 250 251 252
42 253 254 255 256 257 258
43 259 260 261 262 263 264
44 265 266 267 268 269 270
45 271 272 273 274 275 276
46 277 278 279 280 281 282
47 0 0 0 0 0 0
I think it's better to save this dataframe into an Excel worksheet and then remove the padded zeros at the end manually. Hope this helps.
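A minimal sketch of that export step (the filename is an assumption, and an Excel engine such as openpyxl needs to be installed):
# Write the padded grid to a worksheet without the index or the 0..5 header
new_df.to_excel('grids.xlsx', index=False, header=False)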

Pandas Dataframe add rows on top of dataframe

I am trying to add rows on top of the pandas DataFrame data: some blank rows plus a few summary rows, each containing a calculation (Average, etc.) for that column. Can someone please help me with how I can do this?
From:
A B D E F G H I J
0 -8 10 532 533 533 532 534 532 532
1 -8 12 520 521 523 523 521 521 521
2 -8 14 520 523 522 523 522 521 522
3 -4 2 526 527 527 528 528 527 529
4 -4 4 516 518 517 519 518 516 518
5 -4 6 528 529 530 531 530 528 530
6 -4 8 518 521 521 521 522 519 521
7 -4 10 524 525 525 525 525 524 524
8 -4 12 522 523 524 525 525 522 523
9 -2 2 525 526 527 527 527 525 527
10 -2 4 518 519 519 521 520 519 520
11 -2 6 520 522 522 522 522 520 523
12 -2 8 551 551 552 552 552 550 552
13 -2 10 533 534 535 536 535 534 535
14 -2 12 537 539 539 539 538 537 539
15 -2 14 528 530 530 531 530 529 530
16 -1 2 518 519 519 521 520 518 520
To:
A B D E F G H I J
Average 525.6 527.1 527.4 528.0 527.6 526.0 527.4
Sigma 8.6 8.3 8.5 8.1 8.3 8.3 8.4
Minimum 516 518 517 519 518 516 518
Maximum 551 551 552 552 552 550 552
0 -8 10 532 533 533 532 534 532 532
1 -8 12 520 521 523 523 521 521 521
2 -8 14 520 523 522 523 522 521 522
3 -4 2 526 527 527 528 528 527 529
4 -4 4 516 518 517 519 518 516 518
5 -4 6 528 529 530 531 530 528 530
6 -4 8 518 521 521 521 522 519 521
7 -4 10 524 525 525 525 525 524 524
8 -4 12 522 523 524 525 525 522 523
9 -2 2 525 526 527 527 527 525 527
10 -2 4 518 519 519 521 520 519 520
11 -2 6 520 522 522 522 522 520 523
12 -2 8 551 551 552 552 552 550 552
13 -2 10 533 534 535 536 535 534 535
14 -2 12 537 539 539 539 538 537 539
15 -2 14 528 530 530 531 530 529 530
16 -1 2 518 519 519 521 520 518 520
I'd make sure that this is actually what you want to do; storing both data and summary statistics in the same frame is a little odd. That said, you can use concat to stack DataFrames.
In [23]: pd.concat([df.describe(), df])
Out[23]:
A B D E F G ...
count 17.000000 17.0000 17.000000 17.000000 17.000000 17.000000
mean -3.705882 8.0000 525.647059 527.058824 527.352941 528.000000
std 2.284861 4.1833 8.866659 8.518147 8.760288 8.396428
min -8.000000 2.0000 516.000000 518.000000 517.000000 519.000000
25% -4.000000 4.0000 520.000000 521.000000 522.000000 522.000000
50% -4.000000 8.0000 524.000000 525.000000 525.000000 525.000000
75% -2.000000 12.0000 528.000000 530.000000 530.000000 531.000000
max -1.000000 14.0000 551.000000 551.000000 552.000000 552.000000
0 -8.000000 10.0000 532.000000 533.000000 533.000000 532.000000
1 -8.000000 12.0000 520.000000 521.000000 523.000000 523.000000
2 -8.000000 14.0000 520.000000 523.000000 522.000000 523.000000
...
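If only the four rows from the desired output are needed (rather than the full describe() block), a sketch along the same lines, assuming the column names from the question:
import pandas as pd

# Aggregate only the measurement columns, then rename to the labels from the question
stats = df.drop(columns=['A', 'B']).agg(['mean', 'std', 'min', 'max'])
stats.index = ['Average', 'Sigma', 'Minimum', 'Maximum']

# Stack the summary rows on top of the original data; A and B stay empty (NaN) there
out = pd.concat([stats, df])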

How to encode a text file using ASMO449+? .NET

Dear all, how can I encode a text file to ASMO449+?
Thanks
That's code page 709. Difficult; .NET doesn't support it. The best thing to do is to use code page 1256, the Windows code page for Arabic, and then translate the bytes using this conversion table (also available as a webpage):
/*000-015*/ 000 001 249 003 004 005 006 007 008 009 010 011 012 013 014 015
/*016-031*/ 016 017 018 019 022 023 024 025 026 027 028 029 030 031 254 255
/*032-047*/ 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047
/*048-063*/ 048 049 050 051 052 053 054 055 056 057 058 059 060 061 062 063
/*064-079*/ 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079
/*080-095*/ 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095
/*096-111*/ 096 097 098 099 100 101 102 103 104 105 106 107 108 109 110 111
/*112-127*/ 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
/*128-143*/ 128 132 174 159 134 141 142 143 144 145 146 190 148 149 155 156
/*144-159*/ 157 158 160 161 162 002 163 224 164 165 166 188 167 252 168 169
/*160-175*/ 171 172 154 176 177 178 179 021 180 181 182 183 184 173 185 186
/*176-191*/ 189 192 220 221 222 223 020 250 153 243 187 244 245 246 247 191
/*192-207*/ 248 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207
/*208-223*/ 208 209 210 211 212 213 214 170 215 216 217 218 219 225 226 227
/*224-239*/ 133 228 131 229 230 231 232 135 138 130 136 137 233 234 140 139
/*240-255*/ 235 236 237 238 147 239 240 175 241 151 242 150 129 251 152 253
var enc = Encoding.GetEncoding(1256);
var ara = "العربية";
var res = enc.GetBytes(ara);
// TODO: apply table
//...