When I add a new rule to my NSG, it takes some time until it becomes effective. This is confusing because on a few occasions I thought my rules were not working and kept changing things.
Is there any way to find out when a new NSG rule starts to be effective?
If not, is there any documentation that explains the average delay until a new NSG rule becomes effective?
I checked by adding an NSG rule to a VM but could not observe any delay.
If you are facing this issue, you can diagnose whether your security rule has taken effect by following the steps in the document below.
Reference:
Diagnose a virtual machine network traffic filter problem | Microsoft Docs
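For example, the IP flow verify check used in that document can also be run from the Azure CLI to confirm whether your rule is actually being applied. A sketch (the resource names, ports, and addresses are placeholders):

az network watcher test-ip-flow \
    --resource-group MyResourceGroup \
    --vm MyVm \
    --direction Inbound \
    --protocol TCP \
    --local 10.0.0.4:80 \
    --remote 203.0.113.5:60000

The output reports Allow or Deny together with the name of the security rule that matched, which tells you whether your new rule is in effect.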
Note: If you still face the issue, please create an Azure support request for assisted troubleshooting. You can go to Help + support and create a new technical support request.
I am trying to discover a topic and subsequently read/take its samples. The problematic topic is published under the BuiltinQosLibExp::Generic.KeepLastReliable.TransientLocal profile, and the message is fired only once, at startup of the publisher application. A few things to consider:
I'm not using this profile on my side; the subscriber takes the default QoS configuration in code:

// Default DataReader QoS from the subscriber (its Durability kind is VOLATILE by default)
dds::sub::qos::DataReaderQos tempQos = inSubScriber->default_datareader_qos();
m_EntitySpecReader = new dds::sub::DataReader<XXX_ICD::Entity_Specification_DT>(
    *inSubScriber, topicLocal, tempQos, m_EntitySpecListener);
The problem is not a firewall or connection issue, as I receive other cyclic topics without any problem.
What makes it frustrating is that I do see this topic when I monitor with either rtiddsspy or RTI Administration Console.
The last and most frustrating point, where I actually felt stuck: I have a listener configured with all available callbacks, and I expected to get, if not the data, then at least some callback hinting at a possible mismatch, a loss, something... but it stays silent no matter what I try. :)
I will be more than happy if somebody has an answer or a potential direction to check. :)
You are using the default QoS for your DataReader. This means that its Durability policy is VOLATILE. Even though the DataWriter is configured as TRANSIENT_LOCAL, it will not deliver "old" samples to your DataReader since it is not requesting those due to its volatile durability. In this context, "old" samples are samples that were written before the DataWriter discovered the DataReader.
Things should start working as expected when you configure your DataReader with a TRANSIENT_LOCAL Durability policy as well.
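For illustration, a minimal sketch of that change against your snippet (RTI Connext Modern C++ API, with the variable names from the question). Historical samples are only redelivered over a reliable channel, so the reader is made RELIABLE here too:

dds::sub::qos::DataReaderQos tempQos = inSubScriber->default_datareader_qos();
// Request historical ("old") samples from already-running TRANSIENT_LOCAL writers;
// reliable delivery is needed for the writer to resend its history to a late joiner.
tempQos << dds::core::policy::Durability::TransientLocal()
        << dds::core::policy::Reliability::Reliable();
m_EntitySpecReader = new dds::sub::DataReader<XXX_ICD::Entity_Specification_DT>(
    *inSubScriber, topicLocal, tempQos, m_EntitySpecListener);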
If you instrumented a Listener on the DataReader, it should show you that a match has taken place though, or that it has failed. If you implemented both the on_subscription_matched and on_requested_incompatible_qos callbacks, then at least one of those two should fire if you have both applications started and if they are able to discover each other.
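For instance, a minimal sketch of such a listener (RTI Connext Modern C++ API assumed, type name taken from the question; only the two relevant callbacks shown):

#include <iostream>
#include <dds/dds.hpp>

// Reports matching and QoS incompatibility instead of staying silent.
class EntitySpecListener
    : public dds::sub::NoOpDataReaderListener<XXX_ICD::Entity_Specification_DT> {
public:
    void on_subscription_matched(
        dds::sub::DataReader<XXX_ICD::Entity_Specification_DT>& reader,
        const dds::core::status::SubscriptionMatchedStatus& status) override
    {
        std::cout << "Matched DataWriters: " << status.current_count() << std::endl;
    }

    void on_requested_incompatible_qos(
        dds::sub::DataReader<XXX_ICD::Entity_Specification_DT>& reader,
        const dds::core::status::RequestedIncompatibleQosStatus& status) override
    {
        // Fires when a discovered DataWriter offers QoS that does not satisfy the reader's request
        std::cout << "Incompatible QoS detected, total count: "
                  << status.total_count() << std::endl;
    }
};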
Since you discovered that the problem was a type mismatch, I wanted to show how the Admin Console tool could have helped you find that. Reproducing your issue, this is what it showed:
I am trying to learn Istio. I was able to set up simple traffic shifting in which 40 percent of traffic goes to one version and the remaining 60 percent to the other. My question is: can I make this weight (40/60) dynamic, based on the percentage of error responses from the two versions, so that the version with fewer error responses receives more traffic and eventually 100 percent?
Or at least have it change over time, for example a 2 percent shift every hour.
Also, would this require me to run kubectl apply again and again?
For the first part: Istio has no feature that makes a VirtualService route based on error rates. Weight-based routing only splits the requests coming through the VirtualService by the configured percentages.
Secondly, Kubernetes objects are persistent entities, and the Istio CRDs controlling weight-based routing have their settings defined in the object's spec, whose desired state must be provided by the user; such configuration is not expected to change dynamically on its own.
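For reference, a minimal sketch of the static 40/60 split under discussion (the service and subset names are placeholders). Shifting the weights means editing this spec and applying it again, which is why a time-based shift would need repeated kubectl apply or kubectl patch calls:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
      weight: 40
    - destination:
        host: my-service
        subset: v2
      weight: 60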
For your scenario, deploying a new version without knowing whether it will error more than the previous one, and expecting the versions to error enough to decide which one should persist, may not be the best approach.
I'd recommend using traffic mirroring to test production traffic against the new version and, from that, determining whether it is worth deploying via any existing/supported deployment strategy.
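As a hedged sketch of that mirroring setup (same placeholder names): all live traffic stays on v1 while a copy of each request is sent to v2, whose responses are discarded:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
      weight: 100
    mirror:
      host: my-service
      subset: v2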
We've been using the TokBox platform for several months now with a JavaScript web client as well as an Android phone client, where sessions and connections are managed by a Python server. While integration and bring-up went well on both ends (client and server), we continue to encounter problems with the in-session audio and video experience.
Sessions are always routed and always between two participants only, with much use of a collaborative editor.
The in-session experience is like a coin toss: we never know how it’s going to go, and that’s becoming a business threat.
Web-Client: A/V Resources
The most common problem is the acquisition of audio and/or video: at the beginning of a session, one or the other participant may have problems hearing or seeing the other. Allocating a new connection to establish new streams does not fix that, nor does restarting the browser.
Question: What’s the recommended way to detect possible resource locks (e.g. does another application hog the camera/microphone)?
Web-Client: Network
Bandwidth and packet loss are a challenge; for example, this Inspector graph:
Audio and video of both participants are all over the place, and while we cannot control the network connections, the web client should be able to reliably provide useful information.
Question: Other than continuous connection monitoring with getStats() and maybe the experimental navigator.connection property, how can the web-client monitor network connectivity?
Pre-Call Test
We recommend that customers run a pre-call test and have implemented it on our site as well. However, the results of that test often do not reflect the in-session connectivity. Worse, a pre-call test may report bandwidth so low that no video is possible, while Skype works just fine.
Question: How can that be?
I'm a member of the TokBox development team. I remember you reported an issue with the Python SDK, thanks for that!
Web-Client: A/V Resources
Most acquisition issues are detected by the JS SDK and if they aren't then we'd really like to hear about it! Please report reproduction steps or affected session IDs to TokBox support (referencing this StackOverflow question): https://support.tokbox.com/hc/en-us/requests/new
Most acquisition errors appear as OT_HARDWARE_UNAVAILABLE or OT_MEDIA_ERR_ABORTED errors. Are you detecting and surfacing these errors to your users? There is also the special OT_CHROME_MICROPHONE_ACQUISITION_ERROR error which is due to a known issue with Chrome that has been mostly fixed since Chrome 63 (see https://bugs.chromium.org/p/webrtc/issues/detail?id=4799).
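For example, a hedged sketch of surfacing these at publish time (OpenTok.js; the error names are the ones listed above, the element ID is a placeholder):

// Pass a completion handler so acquisition failures are not silent
var publisher = OT.initPublisher('publisher-element', {}, function (error) {
  if (!error) return;
  if (error.name === 'OT_HARDWARE_UNAVAILABLE' ||
      error.name === 'OT_MEDIA_ERR_ABORTED' ||
      error.name === 'OT_CHROME_MICROPHONE_ACQUISITION_ERROR') {
    // Tell the user another application may be holding the camera/microphone
    alert('Could not access your camera/microphone: ' + error.message);
  }
});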
Web-Client: Network
Unfortunately this is one of the more difficult issues to troubleshoot. Yes, Subscriber#getStats() is the best tool we have at our disposal and is a wrapper around the native RTCPeerConnection#getStats() function. Unfortunately we don't have much control over the values returned by the native function and if you think our SDK is returning incorrect values when compared with values from RTCPeerConnection#getStats() then please let us know!
It would be worthwhile confirming whether the issue is reproducible in all browsers or only a particular one. If you have detailed data regarding the inaccuracy of the native RTCPeerConnection#getStats() function then we could work together to report it to the browser vendor(s).
Fortunately we have just released the new Publisher#getStats() function, which lets you get the publisher side of the stats. This should help you narrow down the cause of a connectivity issue to either the publisher or the subscriber side. Please let us know if this helps with tracking down these issues.
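For illustration, a hedged sketch of polling both sides (callback shapes follow the documented Subscriber#getStats and the new Publisher#getStats; error handling kept minimal):

// Assumes existing `publisher` and `subscriber` objects from OpenTok.js
setInterval(function () {
  subscriber.getStats(function (error, stats) {
    if (error) { return console.error(error); }
    // stats.audio / stats.video carry bytesReceived, packetsReceived, packetsLost
    console.log('subscriber video packetsLost:', stats.video.packetsLost);
  });
  publisher.getStats(function (error, statsArray) {
    if (error) { return console.error(error); }
    // One entry per subscriber receiving the published stream
    statsArray.forEach(function (item) {
      console.log('to subscriber', item.subscriberId,
                  'video bytesSent:', item.stats.video.bytesSent);
    });
  });
}, 5000);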
Pre-Call Test
Again, these tests are based on Subscriber#getStats(), which in turn is based on RTCPeerConnection#getStats(), the accuracy of which is out of our hands; but we'd love any reproduction steps to either fix a bug in our client SDK or report a bug to the browser vendors.
Just to confirm though, when you say you've implemented a pre-call test on your site, did you use the official JavaScript network test module? https://github.com/opentok/opentok-network-test-js This is actually what's used by the TokBox pre-call test.
@Aiham, thanks for responding. I've been looking at the new Publisher#getStats() you linked to (thank you!), so we too can give our users some way of visibly seeing the network conditions that might be affecting the quality of their call (and who's causing it). However, it seems as though bytes/packets sent go up sharply as the number of subscribers increases, even though we're in a routed session.
Am I wrong to expect the Publisher#getStats() statistics to stay fairly stable regardless of the number of subscribers then receiving that stream in a routed session? I expected the nature of a routed call to mean it's sent once to the OpenTok Media Servers, and the statistics would end there.
I would like a ModSecurity rule that takes a list of IP addresses from a text data file and, if the client IP does not match one of them, rate-limits requests to 200 requests per minute.
Don't use ModSecurity for this. It's not great at handling persistent variables between requests, which is needed for any type of rate limiting. The functionality is there, but because of the disk-based SDBM storage used to implement it, it doesn't work under any real load. See this discussion on the ModSecurity mailing list as one of many example threads on this subject.
To me, this will not really be an option in ModSecurity until some non-disk-based storage is available, so it's best to keep an eye on this bug to see when that is implemented.
Instead look at fail2ban or some other firewall protection for this.
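As a rough illustration of the fail2ban route (the jail and filter names, log path, and IP list are placeholders; the option names are standard fail2ban settings), a jail that bans any non-whitelisted client exceeding roughly 200 requests per minute:

/etc/fail2ban/jail.local:

[http-ratelimit]
enabled  = true
filter   = http-ratelimit
logpath  = /var/log/apache2/access.log
# more than 200 hits within a 60-second window triggers a ban
findtime = 60
maxretry = 200
bantime  = 600
# whitelisted addresses are never banned
ignoreip = 127.0.0.1/8 192.0.2.10 198.51.100.0/24

/etc/fail2ban/filter.d/http-ratelimit.conf:

[Definition]
# match every request line in the access log
failregex = ^<HOST> -
ignoreregex =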
Hey there guys, I am a recent grad, and looking at a couple of jobs I am applying for, I see that I need to know things like runtime complexity (straightforward enough), caching (memcached!), and load-balancing issues (no idea on this!).
So, what kind of load-balancing issues and solutions should I try to learn about, or at least be vaguely familiar with, for .NET or Java jobs?
Googling around gives me things like network load balancing, but wouldn't that usually not be administered by a software developer?
One thing I can think of is session management. By default, whenever you get a session ID, that session ID points to some in-memory data on the server. However, when you use load balancing, there are multiple servers. What happens when data is stored in the session on machine 1, but the next request from the user is routed to machine 2? The session data would be lost.
So, you'll have to make sure that either the user gets back to the same machine for every subsequent request ('sticky sessions'), or you do not use in-process session state but out-of-process session state, where session data is stored in, for example, a database.
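In ASP.NET, for example, the switch from in-process to out-of-process session state is a configuration change (a sketch; the connection string is a placeholder):

<!-- web.config: keep session state in SQL Server instead of in-process memory -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI;"
                timeout="20" />
</system.web>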
There is a concept of load distribution, where requests are sprayed across a number of servers (usually with session affinity). Here there is no feedback on how busy any particular server may be; we just rely on statistical sharing of the load. You could view the WebSphere HTTP plugin in WAS ND as doing this. It actually works pretty well, even for substantial web sites.
Load balancing tries to be cleverer than that: some feedback on the relative load of the servers determines where new requests go (even then, session affinity tends to be treated as a higher priority than balancing the load). The WebSphere On Demand Router, originally delivered in XD, does this. If you read this article you will see the kind of algorithms used.
You can also achieve balancing with network spraying devices; these can consult "agents" running on the servers, which give feedback to the sprayer as a basis for deciding where a request should go. So even this hardware-based approach can have a software element. See the Dynamic Feedback Protocol.
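To make the distinction concrete, here is a toy sketch (illustrative only, not any product's actual algorithm): spraying picks the next server blindly, while a feedback-based balancer consults the load the agents report:

#include <cstddef>
#include <vector>

struct Server {
    double reported_load;  // feedback from an agent on the server, e.g. 0.0 - 1.0
};

// Load distribution: statistical spraying with no feedback.
std::size_t round_robin(std::size_t previous, std::size_t server_count) {
    return (previous + 1) % server_count;
}

// Load balancing: use the agents' feedback to pick the least-loaded server.
// (In practice, session affinity would usually override this choice.)
std::size_t least_loaded(const std::vector<Server>& servers) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < servers.size(); ++i) {
        if (servers[i].reported_load < servers[best].reported_load) {
            best = i;
        }
    }
    return best;
}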