How to visualise a large amount of code in ProB - formal-methods

I have a problem visualising a large amount of code in ProB.
This graph shows the login section of a server, generated with ProB (using Graphviz's dotty), but I cannot find a way to produce such a graph for a larger amount of code.
Please let me know your suggestions and ideas.

Related

NBeats does not work for long-term daily time series data. How can I process my data to correct this?

I am following the code as per this notebook: https://www.kaggle.com/code/stpeteishii/google-stock-prediction-darts/notebook
It's a really simple implementation of the NBeats model, and it takes into account 400 days of the Google stock price. However, when I increase the dataset to take into account all 4000 days, the N-Beats forecast is completely thrown off and gives a completely incorrect forecast with a MAPE of over 100.
I understand that one possibility is the seasonal nature of the stock price. I tried the Dickey-Fuller test (not using the Darts library) and it is indeed seasonal data. However, when I try to inspect seasonality with the Darts library as follows:
from darts.utils.statistics import plot_acf, check_seasonality
plot_acf(train, m=365, alpha=0.05)
I get a max lag error.
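My guess, which I have not been able to verify, is that the default max_lag of 24 in plot_acf is smaller than m=365 and triggers the error; passing max_lag explicitly might avoid it (400 is an arbitrary value that just has to be at least m and smaller than len(train)):

from darts.utils.statistics import plot_acf, check_seasonality

# max_lag must be >= m and < len(train); 400 is an arbitrary choice here
plot_acf(train, m=365, max_lag=400, alpha=0.05)
is_seasonal, period = check_seasonality(train, m=365, max_lag=400, alpha=0.05)
print(is_seasonal, period)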
Can someone please help me with how to forecast stock market data from a 10-year daily time series using the darts library? Also, I'd be extremely grateful if someone could give me some additional tips on forecasting stock data with darts, since there is not much info or many tutorials available online.
I've gone through the entire docs and I've got a good hang of working with darts using the darts open-source datasets; however, none of the examples seem to work with stock market data.
Any help would be greatly appreciated.
Thanks!

Methods of labeling human muscles with TensorFlow

I want to be able to label all of the muscles on an athlete's body. I have a lot of images in which the athletes are in almost the same body pose, but the issue I am running into is that drawing a box around a muscle is inaccurate, as it ends up overlapping other muscles. Drawing exact outlines around them is difficult, since there are a lot of smaller muscles, and it creates inconsistency over 20-30 images. I was wondering if there is a way to feed in a human anatomy reference and then have TensorFlow go in and label all of the muscles in given pictures.
Or I was wondering if you all had a different idea on how to approach this problem that I'm running into.
I don't have anybody else to ask, and I've been researching this for a while, so if I missed or overlooked something please forgive me.
The way I see it, you need to combine this with some preprocessing steps to normalize your target object in the image, such as:
identify the human,
identify the pose or skeleton (nowadays there are many open-source tools for this, such as openpose-plus),
use the pose estimation results to label the limbs or parts of the body, from which you can proceed either with hand-crafted image processing or another segmentation model (a rough sketch follows below).
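A rough sketch of the second and third steps, using MediaPipe Pose here purely as a stand-in for openpose-plus (the image path, landmark choice and crop margin are placeholders, not a tested pipeline):

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
image = cv2.imread("athlete.jpg")          # placeholder input image
h, w = image.shape[:2]

# Run single-image pose estimation to get normalized body keypoints.
with mp_pose.Pose(static_image_mode=True) as pose:
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    lm = results.pose_landmarks.landmark
    shoulder = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
    elbow = lm[mp_pose.PoseLandmark.LEFT_ELBOW]
    # Crop a padded box around the upper-arm segment; this region can then be
    # passed to hand-crafted processing or a segmentation model for the muscles.
    xs = [int(shoulder.x * w), int(elbow.x * w)]
    ys = [int(shoulder.y * h), int(elbow.y * h)]
    pad = int(0.1 * max(h, w))
    crop = image[max(min(ys) - pad, 0):min(max(ys) + pad, h),
                 max(min(xs) - pad, 0):min(max(xs) + pad, w)]
    cv2.imwrite("left_upper_arm.jpg", crop)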

How to disable summary for Tensorflow Estimator?

I'm using Tensorflow-GPU 1.8 API on Windows 10. For many projects I use tf.Estimator, which really works great. It takes care of a bunch of steps, including writing summaries for Tensorboard. But right now the 'events.out.tfevents' file is getting way too big and I am running into "out of space" errors. For that reason I want to disable the summary writing, or at least reduce the amount of summaries written.
Going along with that mission, I found out about the RunConfig you can pass when constructing the tf.Estimator. Apparently the parameter 'save_summary_steps' (which by default is 200) controls how summaries are written out. Unfortunately, changing this parameter seems to have no effect at all. It neither disables the summaries (using a None value) nor reduces (choosing higher values, e.g. 3000) the size of 'events.out.tfevents'.
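A stripped-down version of what I am trying looks roughly like this (model_fn stands in for my actual model function):

import tensorflow as tf

run_config = tf.estimator.RunConfig(
    model_dir="./model_dir",
    save_summary_steps=3000,   # expected: summaries only every 3000 steps (or none with None)
)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)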
I hope you guys can help me out here. Any help is appreciated.
Cheers,
Tobs.
I've observed the following behavior. It doesn't make sense to me so I hope we get a better answer:
When the input_fn gets data from tf.data.TFRecordDataset, the number of steps between saving events is the minimum of save_summary_steps and (number of training examples divided by batch size). That means it saves at least once per epoch.
When the input_fn gets data from tf.TextLineReader, it follows save_summary_steps as you'd expect, and I can give it a large value for infrequent updates.
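For concreteness, the kind of input_fn I mean in the first case looks roughly like this (the feature spec and file name are placeholders):

import tensorflow as tf

def input_fn():
    def parse(serialized):
        # Placeholder feature spec; replace with the real one.
        features = tf.parse_single_example(serialized, {
            "x": tf.FixedLenFeature([10], tf.float32),
            "y": tf.FixedLenFeature([], tf.int64),
        })
        return {"x": features["x"]}, features["y"]

    dataset = tf.data.TFRecordDataset(["train.tfrecords"])
    return dataset.map(parse).shuffle(1000).batch(32).repeat()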

Create subplots from interactive python plot

I have some LVIS Lidar data in hdf5 format.
The data has Lat and Long co-ordinates, so I have been able to visualise them on a map using Basemap:
import h5py
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap

# Read the longitude/latitude datasets from the LVIS file.
f = h5py.File('ILVIS1B_GA2016_0304_R1701_043591.h5', 'r')
X = f['/LON0/'][...]
Y = f['/LAT0/'][...]

# Mercator map of the area of interest.
m = Basemap(projection='merc', llcrnrlat=-0.5, urcrnrlat=0.5,
            llcrnrlon=9, urcrnrlon=10, lat_ts=0.25, resolution='i')
m.drawcoastlines()
m.drawcountries()
parallels = np.arange(-9., 10., 0.5)
m.drawparallels(parallels, labels=[False, True, True, False])
meridians = np.arange(-1., 1., 0.5)
m.drawmeridians(meridians, labels=[True, False, False, True])
m.drawmapboundary(fill_color='white')

# Project lon/lat to map coordinates and plot the points.
x, y = m(X, Y)
scatter = m.scatter(x, y)
plt.show()
This gets me this, where the orange bands are very dense points:
The hdf5 file also has the full waveform data for each mapped point (each datapoint is a reflection detected at the sensor, as a function of time) so that each of the orange points has data associated with it like:
Ultimately, I would like to be able to click on any of the orange points and have the corresponding waveform displayed. I have looked into interactive plots for this and have come across a number of libraries (mpld3, plotly etc.).
I'm having some trouble getting my head around some of these and how I can get my data into the examples - my Python isn't up to this level. Does anyone have any advice on where to start? Which libraries would be best suited to this? A little help to understand the basics would be appreciated.
Apologies that there is no direct question here, I'm just after some info from the knowledgeable community.
The question seems to be: how do I tackle a task I have no clue how to solve?
Step 1: Search for a possible solution. It may happen that someone else has already solved your problem. This will mostly not be the case, but you may be lucky.
Step 2: Abstract the task. What would be the general problem that a lot of people might have and for which there might be a solution? Does it need to be hdf5 files? No. Is georeferencing important? Maybe, but one could neglect it for the moment. Which requirements are really important, which are not?
Step 3: Search again. You will now have more success finding similar or related problems.
Step 4: Look at the tools in use. Make a list of possible tools and check them against your requirements: interactivity, application or web-based, accuracy etc.
Step 5: Decide on one tool and go for it. Start with a general case study. Can I plot a map on the left and a graph on the right side using this tool? If not, find out why - maybe there is a general problem with this, maybe there is just an implementation detail missing. At this point you may ask a question about the case study problem, specifying the tool in use and providing the code that gives the problem. Do not think about your actual problem until this is solved.
Step 6: Proceed and try to add interactivity. Can I get something to happen when clicking? Again, treat this independently of the actual problem. Search for solutions and if there are none, ask a question about it. (A minimal example of steps 5 and 6 is sketched below.)
Step 7: Proceed further up to the point where you're truly stuck. Now is the time to finally ask a question here, but with all the details that have brought you down to step 7 inside the question.
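As such a minimal case study for steps 5 and 6, plain matplotlib can already do "scatter on the left, waveform on the right, update on click"; the arrays here are random stand-ins for the actual hdf5 contents:

import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: 50 point coordinates and one waveform (128 samples) per point.
xs, ys = np.random.rand(2, 50)
waveforms = np.random.rand(50, 128)

fig, (ax_map, ax_wave) = plt.subplots(1, 2, figsize=(10, 4))
points = ax_map.scatter(xs, ys, picker=5)   # picker=5: clicks within 5 points register
ax_map.set_title('points (click one)')
ax_wave.set_title('waveform')

def on_pick(event):
    idx = event.ind[0]                      # index of the clicked point
    ax_wave.clear()
    ax_wave.plot(waveforms[idx])
    ax_wave.set_title('waveform for point %d' % idx)
    fig.canvas.draw_idle()

fig.canvas.mpl_connect('pick_event', on_pick)
plt.show()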

Using tensorflow to identify lego bricks?

Having read this article about a guy who uses TensorFlow to sort cucumbers into nine different classes, I was wondering if this type of process could be applied to a large number of classes. My idea would be to use it to identify Lego parts.
At the moment, a site like Bricklink describes more than 40,000 different parts, so it would be a bit different from the cucumber example, but I am wondering if it sounds suitable. There is no easy way to get hundreds of pictures for each part, but does the following process sound feasible:
take pictures of a part;
try to identify the part using TensorFlow;
if it does not identify the correct part, take more pictures and feed the neural network with them;
go on with the next part.
That way, each time we encounter a new piece we "teach" the network its reference so that it can be recognized better the next time. Like that, and after hundreds of iterations monitored by a human, could we expect TensorFlow to be able to recognize the parts? At least the most common ones?
My question might sound stupid, but I am not into neural networks, so any advice is welcome. At the moment I have not found any way to identify a Lego part based on pictures, and this "cucumber example" sounds promising, so I am looking for some feedback.
Thanks.
You can read about the work of Jacques Mattheij; he actually uses a customized version of Xception [1] running on https://keras.io/.
The introduction is Sorting 2 Metric Tons of Lego.
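To give a rough idea of what "a customized version of Xception running on Keras" could look like structurally, here is a minimal transfer-learning sketch (my own guess at the structure, not Mattheij's actual code; the class count and image size are placeholders):

from keras.applications import Xception
from keras.layers import Dense
from keras.models import Model

num_classes = 1000                         # placeholder: number of part classes
base = Xception(weights="imagenet", include_top=False,
                pooling="avg", input_shape=(299, 299, 3))
outputs = Dense(num_classes, activation="softmax")(base.output)
model = Model(inputs=base.input, outputs=outputs)

# Freeze the pretrained backbone first; only the new classification head is trained.
for layer in base.layers:
    layer.trainable = False

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would then be fed the (partially machine-labelled) part images.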
In Sorting 2 Tons of Lego, The Software Side, you can read:
The hard challenge to deal with next was to get a training set large
enough to make working with 1000+ classes possible. At first this
seemed like an insurmountable problem. I could not figure out how to
make enough images and to label them by hand in acceptable time, even
the most optimistic calculations had me working for 6 months or longer
full-time in order to make a data set that would allow the machine to
work with many classes of parts rather than just a couple.
In the end the solution was staring me in the face for at least a week
before I finally clued in: it doesn’t matter. All that matters is that
the machine labels its own images most of the time and then all I need
to do is correct its mistakes. As it gets better there will be fewer
mistakes. This very rapidly expanded the number of training images.
The first day I managed to hand-label about 500 parts. The next day
the machine added 2000 more, with about half of those labeled wrong.
The resulting 2500 parts were the basis for the next round of
training 3 days later, which resulted in 4000 more parts, 90% of which
were labeled right! So I only had to correct some 400 parts, rinse,
repeat… So, by the end of two weeks there was a dataset of 20K images,
all labeled correctly.
This is far from enough, some classes are severely under-represented
so I need to increase the number of images for those, perhaps I’ll
just run a single batch consisting of nothing but those parts through
the machine. No need for corrections, they’ll all be labeled
identically.
A recent update is Sorting 2 Tons of Lego, Many Questions, Results.
[1] CHOLLET, François. Xception: Deep Learning with Depthwise Separable Convolutions. arXiv preprint arXiv:1610.02357, 2016.
I have started this using IBM Watson's Visual Recognition.
I had six different bricks to be recognized on the transport belt background.
I am actually thinking about TensorFlow, since I can have it running locally.
The codelab TensorFlow for Poets describes almost exactly what you want to achieve.
For a demo of the Watson version:
https://www.ibm.com/developerworks/community/blogs/ibmandgoogle/entry/Lego_bricks_recognition_with_Watosn_lego_and_raspberry_pi?lang=en