I have run the quick start tutorial of SUMO and it is working fine (https://sumo.dlr.de/docs/Tutorials/quick_start.html).
I want to understand the purpose of internal edges, as they are not covered in the quick start tutorial, though they appear in the tutorial network file on GitHub:
(https://github.com/eclipse/sumo/blob/master/tests/complex/tutorial/quickstart/data/quickstart.net.xml).
I have read the internal edge details: https://sumo.dlr.de/docs/Networks/SUMO_Road_Networks.html#internal_edges
My questions are:
What is the benefit of using internal edges?
Do we have to define internal edges manually? I couldn't find an option for this in netedit (I have the latest version).
Is it always required to define internal edges?
Thanks in advance.
I finally received an answer from SUMO's mailing list:
Please check whether the "internal-links" option in netedit is enabled. You can do that by going to "Processing" in the toolbar, then "Options" -> "Junctions". The option "no internal links" should NOT be selected. After you have done this, try to save the network again and the internal edges should be added.
The GitHub link you sent (https://github.com/eclipse/sumo/blob/master/tests/complex/tutorial/quickstart/data/quickstart.net.xml) is a SUMO network file. In this file you can find the elements of the network: nodes, edges, etc. See SUMO_Road_Networks.html (https://sumo.dlr.de/docs/Networks/SUMO_Road_Networks.html) for more information about SUMO networks.
Your two questions are answered in the link I sent you (see internal_links (https://sumo.dlr.de/docs/Simulation/Intersections.html#internal_links) for information about simulating with/without internal edges or links). Internal links and internal edges are the same thing.
Regards,
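To double-check that the saved network actually contains internal edges, a small sumolib sketch like the following can help. This is a rough illustration only; the file name and the withInternal flag are assumptions based on the tutorial and the sumolib documentation, so adjust the path to your setup:

# List internal edges in a SUMO network (rough sketch).
# Internal edge IDs start with ':' and only appear when the network
# was built with internal links enabled.
import sumolib  # ships with SUMO under <SUMO_HOME>/tools

net = sumolib.net.readNet("quickstart.net.xml", withInternal=True)
internal = [e.getID() for e in net.getEdges() if e.getID().startswith(":")]
print("internal edges:", len(internal))
for edge_id in internal[:10]:
    print(edge_id)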
Here is what I am looking for.
I am trying to host my own vector tile server.
Here is an image of the kind of map I want; on some pages it is called "Positron", which is not open-sourced. Even something similar to it would work for me.
Reference link for the style.json that we found: Positron Style JSON.
Inside style.json they have included URLs that rely on MapTiler keys. These used to be freely available, but they were removed in February, and we have not found an alternative for running this on the frontend.
style.json snippet
"sources": {
"openmaptiles": {
"type": "vector",
"url": "https://api.maptiler.com/tiles/v3/tiles.json?key={key}"
}
},
"sprite": "https://openmaptiles.github.io/positron-gl-style/sprite",
"glyphs": "https://api.maptiler.com/fonts/{fontstack}/{range}.pbf?key={key}",
Initially, I was not really aware of the difference between vector tiles and raster tiles.
I followed this link and built the server: OSM Tile Server. It worked well, but it serves the default style, and I was looking for the kind of design linked above. The default raster tiles look like this: OSM Tile View. We are not actually looking for this.
For the PBF file we followed this planet.osm link and imported it into a PostgreSQL database: Planet OSM file used.
Soon we realized we need to host a vector tile server rather than a raster one, since raster servers return images and we are not looking for that kind of design. However, I found no installation guide for a vector tile server on the main switch2OSM website.
I went through this link: OSM Vector Tile Server. But when we tried to install it, the README relies on "tessera", which is deprecated and no longer supported. Now even the raster tile server has stopped working.
Now I don't know what mistake I am making or what step I should take next. Our usage is heavy and we want to host our own vector tile server to reduce the cost.
Any reference or guidance will be really appreciated.
Note: Tech Stack we are using
Frontend: VueJS
Backend: GeoDjango
If anyone gets stuck at the same point I was with switch2OSM and is looking to build their own vector tile server, please use this link: OpenMapTiles.
I wasn't aware of it at the start, so it took some time to figure out.
Also, thanks to the OpenStreetMap Telegram community.
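Once you have your own tile endpoint running (for example tileserver-gl or the OpenMapTiles stack), the MapTiler URLs in style.json just need to be swapped for your own endpoints. Here is a rough Python sketch of that rewrite; the localhost:8080 URLs are placeholders, not the actual paths your server will expose:

# Point the Positron style at a self-hosted tile/glyph/sprite server.
# The localhost URLs below are placeholders - replace them with the
# endpoints your own server actually serves.
import json

with open("style.json") as f:
    style = json.load(f)

style["sources"]["openmaptiles"]["url"] = "http://localhost:8080/data/v3.json"
style["glyphs"] = "http://localhost:8080/fonts/{fontstack}/{range}.pbf"
style["sprite"] = "http://localhost:8080/styles/positron/sprite"

with open("style.local.json", "w") as f:
    json.dump(style, f, indent=2)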
I'm using Windows 10 and Python 3.3. I tried to download fasttext_model300 to calculate soft cosine similarity between documents, but when I run my Python file, it stops when it reaches this statement:
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
There are no errors and no "not responding" message; it just stops without any reaction.
Does anybody know why this happens?
Thanks
I'd recommend against using the gensim api.load() functionality. It dynamically runs new, unversioned source code from remote servers – which is opaque in its operations & suboptimal for maintaining a secure local configuration, or debugging any issues which occur.
Instead, find the exact data files you trust and download them as plain data. Then use specific library operations, like the KeyedVectors.load_word2vec_format() method, to instantiate exactly the model you need, using precise local-file paths you understand.
Following those steps may make it clearer what, if anything, is going wrong. If it doesn't, try also enabling logging at the INFO level to gather more information about what progress is made before failure (and add any new details as a comment or to your question).
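For example, a minimal sketch along those lines (assuming you have downloaded and unzipped the wiki-news-300d-1M-subword.vec file from the fastText site; the path is just an example, so adjust it to wherever you saved the data):

# Load locally downloaded fastText vectors and enable progress logging.
import logging
from gensim.models import KeyedVectors

# INFO logging shows loading progress, which helps diagnose silent stops.
logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s",
                    level=logging.INFO)

# Plain word2vec-format text file downloaded separately (example path).
wv = KeyedVectors.load_word2vec_format("wiki-news-300d-1M-subword.vec",
                                       binary=False)
print(wv.most_similar("computer", topn=5))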
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
Try using this. Source: https://awesomeopensource.com/project/RaRe-Technologies/gensim-data
I would like to take a screenshot of a web site that uses WebGL. I don't need a GPU to open the site; emulation is enough for me.
I already tried headless Chrome for this. It can take screenshots of ordinary web sites, but it does not work for WebGL canvases.
I think one possibility is using OSMesa or something similar to emulate OpenGL.
I have exhausted my ideas for overcoming this. Is this actually possible to do?
If yes, please tell me how to do that. If no, I would like to know why.
Thanks.
Yes it is possible!
You need the right combination of:
a headless-chromium binary
a libosmesa.so binary (in the same directory)
launching Chrome headless with the right flags, such as (see link for full details): ['--use-gl=osmesa', '--enable-webgl', '--ignore-gpu-blacklist', '--homedir=/tmp', '--single-process', '--data-path=/tmp/data-path', '--disk-cache-dir=/tmp/cache-dir']
This thread on the serverless-chrome github project discusses the issue and provides some binaries which I have used to capture screenshots of WebGL content on AWS Lambda using Page.captureScreenshot().
https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-416494572
(See comment by #apalchys on 28th August)
This particular example uses SwiftShader which seems to be the preferred option going forward.
Note, however, that I was unable to create PDFs using Page.printToPDF() with this version - WebGL content just appears blank/white. I was, however, able to get Page.printToPDF() working with an earlier version that uses osmesa; see https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-371199530
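If you prefer to drive this from Python rather than raw DevTools calls, here is a minimal sketch using Selenium with headless Chrome and a software GL backend. The flags mirror the ones listed above; whether SwiftShader or osmesa is available depends on your Chrome build, so treat this as an illustration rather than a guaranteed recipe:

# Take a screenshot of a WebGL page with headless Chrome and software GL.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("--use-gl=swiftshader")  # or osmesa, per the thread above
options.add_argument("--enable-webgl")
options.add_argument("--ignore-gpu-blacklist")
options.add_argument("--window-size=1280,800")

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://get.webgl.org/")  # any page with a WebGL canvas
    driver.save_screenshot("webgl.png")
finally:
    driver.quit()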
I have recently downloaded MT4 & MT5. In both platforms, the historical data section, which should be in the dropdown of the Tools menu, is missing, and I cannot seem to find a way to access this function.
It just doesn't seem to be in the platform at all?
My intention is to carry on with my research on backtesting data.
Step 1) define the problem:
Given the text above, it seems that your MetaTrader Terminal downloads have been installed but do not let you open (Menu) -> [Tools] -> [History Center]. If this is the case, check with the support staff of the broker company you downloaded these platforms from, as brokers can adapt the platform in ways that include the behaviour you describe.
Step 2) explain the target behaviour:
Your initial post mentioned that your intention is to gain access to data for "research on backtesting data".
If this is the actual target, your goal can also be achieved by taking an MT4 platform from any other broker, with or without data, and then importing { PERIOD_M1 | PERIOD_M5 | ... } records via the MT4 [History Center] F2 import interface (a rough sketch of the expected CSV layout is shown below). It is enough to follow the product documentation.
If your quantitative modelling requires tick-based data with market-access-venue "fidelity", there has so far been no way for an end user to import and resample externally collected tick data into the MetaTrader Terminal platform.
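As an illustration of that import step, the [History Center] import dialog commonly accepts one bar per CSV line in the order date, time, open, high, low, close, volume. A rough Python sketch of writing records into that layout follows; the bar values and the file name are hypothetical placeholders, and you should verify the exact column format against your terminal's import dialog:

# Write M1 bars into the CSV layout the MT4 History Center import dialog
# commonly accepts: YYYY.MM.DD,HH:MM,Open,High,Low,Close,Volume
import csv

# Hypothetical bars - replace with your externally collected data.
bars = [
    ("2023.01.02", "00:00", 1.0702, 1.0705, 1.0699, 1.0703, 125),
    ("2023.01.02", "00:01", 1.0703, 1.0707, 1.0702, 1.0706, 98),
]

with open("EURUSD_M1.csv", "w", newline="") as f:
    csv.writer(f).writerows(bars)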
Step 3) demonstrate your research efforts + steps already performed:
This community will always welcome active members, rather than "shouting" things like "Any idea?", "ASAP", or "I need this and that, help me!".
Showing the effort that has been spent on solving the root cause is warmly welcomed, as Stack Overflow strongly encourages members to post high-quality questions, formulated as a Minimal, Complete, Verifiable Example of code that can be re-run to reproduce the problem under test. Screenshots of UI states are OK for your type of issue, to show the blocking state and explain the context.
I realize GPUImage has been well documented and there's a lot of instructions on how to use it on the main github page. However, it fails to explain what a filter chain is - what's addTarget? What's missing is a simple enough diagram showing what needs to be added to what. Is it always GPUImageView (source?) -> add target -> [filter]? I'm sorry if this sounds daft, but I fail to follow the correct sequence given there are so many ways of using it. To me, it sounds like you're connecting it the other way round (such as saying: Connect the socket to the TV). Why not add filter to the source? I'm trying to use it but I get lost in all the addTargets. Thanks!
You can think of it as a series of inputs and outputs. Look in the GPUImage framework project to see which classes are inputs (typically filters) and which are outputs (image view, movie writer, etc.). Each target feeds into the next target in the chain.
Example:
GPUImageMovie -> GPUImageSepiaFilter -> GPUImageMovieWriter
A movie is sent to the sepia filter, which does its work; the movie with the sepia filter applied is then sent to the movie writer, and the movie writer exports a movie with the sepia filter applied.
To help visualize what's going on, any node editor program typically uses this scheme. Think of calling addTarget: as one of the connections in the attached image.
A Google image search for "node editor" will give you plenty of other images to help picture what adding targets does.