pysnmp - how to use compiled mibs during agent implementation - mib

The SNMP agent implementation examples provided with pysnmp don't really leverage the mib.py file generated by compiling a MIB. Is it possible to use this file to simplify agent implementation? Is such an example available for a table? Thanks!

You are right: the existing mibdump.py tool is primarily designed for manager-side MIB compilation. However, the compiled MIB is still useful, and sometimes even crucial, for agent implementation.
For simple scalars you can mass-replace the MibScalar classes with MibScalarInstance ones and add an extra trailing .0 to their OIDs. For example, this line:
sysDescr = MibScalar((1, 3, 6, 1, 2, 1, 1, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 255))).setMaxAccess("readonly")
would change like this:
sysDescr = MibScalarInstance((1, 3, 6, 1, 2, 1, 1, 1, 0), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 255))).setMaxAccess("readonly")
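That mass-replace is mechanical enough to script. Here is a throwaway sketch (the helper name is mine, and it assumes the generated mib.py spells OIDs as literal integer tuples, exactly as in the lines above):

```python
import re

# Match "MibScalar((1, 3, ...)" up to the closing paren of the OID tuple.
SCALAR_DEF = re.compile(r"MibScalar\(\((\d+(?:,\s*\d+)*)\)")

def scalars_to_instances(mib_source):
    """Turn MibScalar definitions into MibScalarInstance ones,
    appending the trailing .0 instance sub-identifier to the OID."""
    return SCALAR_DEF.sub(
        lambda m: "MibScalarInstance((%s, 0)" % m.group(1), mib_source
    )

line = ('sysDescr = MibScalar((1, 3, 6, 1, 2, 1, 1, 1), '
        'DisplayString()).setMaxAccess("readonly")')
print(scalars_to_instances(line))
```

Existing MibScalarInstance definitions are left alone, because the pattern requires the OID tuple to open immediately after the MibScalar name. Remember to also import MibScalarInstance in the rewritten module.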
For SNMP tables it is much trickier, because there can be several cases. If it's a static table that never changes its size, you can basically replace MibTableColumn with MibScalarInstance and append the index part of the OID. For example, this line:
sysORID = MibTableColumn((1, 3, 6, 1, 2, 1, 1, 9, 1, 2), ObjectIdentifier()).setMaxAccess("readonly")
would look like this (note index 12345):
sysORID = MibScalarInstance((1, 3, 6, 1, 2, 1, 1, 9, 1, 2, 12345), ObjectIdentifier()).setMaxAccess("readonly")
The rest of the MibTable* classes can then be removed from the generated mib.py.
For dynamic tables that change their shape, whether because the SNMP agent or the SNMP manager modifies them, you might need to preserve all the MibTable* classes and extend/customize the MibTableColumn class so that it actually manages your backend resources in response to SNMP calls.
A hopefully relevant example.

Related

How to reduce Selenium requests (traffic) as much as possible? (Less traffic on residential proxy)

I am writing scrapers using residential proxies (quite expensive), and I noticed that the traffic is quite heavy: Selenium normally sends more than one request for a single URL. I've disabled as much as I can, and I'm wondering if there's anything else I can do to reduce the total amount of traffic. Thanks.
prefs = {
    "profile.managed_default_content_settings.images": 2,
    "profile.default_content_setting_values.javascript": 2,
    "profile.managed_default_content_settings.stylesheets": 2,
    "profile.managed_default_content_settings.plugins": 2,
    "profile.managed_default_content_settings.popups": 2,
    "disk-cache-size": 4096,
    "profile.managed_default_content_settings.media_stream": 2,
    # "profile.managed_default_content_settings.cookies": 2,
    # "profile.default_content_setting_values.notifications": 2,
    "profile.managed_default_content_settings.geolocation": 2,
    # "download.default_directory": "d:/temp",
    # "plugins.always_open_pdf_externally": True,
}
self.chrome_options.add_experimental_option("prefs", prefs)
I've tried to disable as many Chrome features as I could (images, JavaScript, stylesheets, etc.), as shown above.
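For comparison, here is one way those prefs could be combined with a couple of Chrome switches and Selenium 4's "eager" page load strategy, which returns control at DOMContentLoaded instead of waiting for every subresource. This is a sketch, not a drop-in: verify the switch names and the page_load_strategy attribute against your Selenium/Chrome versions. The selenium import is guarded so the snippet runs even without it installed.

```python
prefs = {
    "profile.managed_default_content_settings.images": 2,       # block images
    "profile.default_content_setting_values.javascript": 2,     # block JavaScript
    "profile.managed_default_content_settings.stylesheets": 2,  # block CSS
}

switches = [
    "--disable-extensions",
    "--blink-settings=imagesDisabled=true",  # second layer of image blocking
]

try:
    from selenium.webdriver.chrome.options import Options

    options = Options()
    for switch in switches:
        options.add_argument(switch)
    options.add_experimental_option("prefs", prefs)
    options.page_load_strategy = "eager"  # stop waiting at DOMContentLoaded
except ImportError:
    options = None  # selenium not installed; prefs/switches remain inspectable
```

The "eager" strategy mainly saves wall-clock time; whether it also saves bytes depends on how quickly you close the page after the data you need has loaded.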

Test-driven development with OptaPlanner

In the OptaPlanner library, in the file "CloudBalancingScoreConstraintTest.java", there is the following line of code: scoreVerifier.assertHardWeight("requiredCpuPowerTotal", -570, solution). How was the expected weight -570 calculated? Was it known before creating the classes (CloudBalance.java, CloudComputer.java), as in a test-driven development approach, or only after creating them?
TL;DR: ignore CloudBalancingScoreConstraintTest and look at CloudBalancingConstraintProviderTest instead.
Long explanation:
CloudBalancing currently still defaults to scoreDrl. It also has an alternative implementation with ConstraintStreams: CloudBalancingConstraintProvider. ConstraintStreams are in many ways better than scoreDRL; as of OptaPlanner 8.4.0 they are equally fast and have 99% feature parity with DRL. Once that reaches 100%, all examples will use ConstraintStreams by default.
So why is it -570? Because ScoreVerifier checks all constraints: add one constraint and you have to adjust all your tests. Very painful. Not TDD.
What's the fix? Use ConstraintVerifier. ConstraintVerifier is ScoreVerifier++. ConstraintVerifier tests the matchWeight of one constraint.
`constraintWeight * matchWeight * (+1 for reward | -1 for penalize) = score impact`
It even ignores the constraintWeight of the constraint, which is a blessing once the business stakeholders start tweaking the constraint weights. Additionally, it's far less verbose to use (no solution instance needed). What's the catch? It only works for ConstraintStreams.
An example:
@Test
public void requiredCpuPowerTotal() {
    CloudComputer computer1 = new CloudComputer(1, 1, 1, 1, 2);
    CloudComputer computer2 = new CloudComputer(2, 2, 2, 2, 4);
    CloudProcess unassignedProcess = new CloudProcess(0, 1, 1, 1);
    // Total = 2, available = 1.
    CloudProcess process1 = new CloudProcess(1, 1, 1, 1);
    process1.setComputer(computer1);
    CloudProcess process2 = new CloudProcess(2, 1, 1, 1);
    process2.setComputer(computer1);
    // Total = 1, available = 2.
    CloudProcess process3 = new CloudProcess(3, 1, 1, 1);
    process3.setComputer(computer2);
    constraintVerifier.verifyThat(CloudBalancingConstraintProvider::requiredCpuPowerTotal)
            .given(unassignedProcess, process1, process2, process3)
            .penalizesBy(1); // Only the first computer.
}
To learn more, watch Lukas's OptaPlanner test-driven development video.

How to control errors in mule 4?

There is a scheduler-triggered API that I have created; inside it there is a For Each scope that iterates over the array [1, 2, 3, 4, 5, 6].
Based on the value from the array I am using a Choice router.
Now, if an error occurs when payload == 2, how should I handle it so that, after catching the error, control moves on to payload == 3?
No batching is used in this API.
Use a Try scope with error handling inside the Choice branch for payload == 2, and handle the error with an On Error Continue handler so the For Each moves on to the next element.
Note that there is nothing batch-specific in your question or in this solution.

How to decode response from Glucose Measurement Characteristic (Bluetooth)

I have a React Native application and I'm trying to get glucose measurements from an Accu-Chek Guide device.
I have limited knowledge of BLE, and this Stack Overflow question helped me a lot to understand Bluetooth and retrieving glucose measurements:
Reading from a Notify Characteristic (Ionic - Bluetooth)
So, what I'm doing in my code:
1. connect to the BLE peripheral
2. monitor the Glucose Measurement characteristic and the Record Access Control Point
3. send 0x0101 (Report stored records | All records) to the Record Access Control Point
4. decode the response
So far I have steps 1-3 working, but I don't know how to decode the notification response:
Notification response of Glucose Measurement
[27, 4, 0, 195, 164, 7, 7, 14, 11, 6, 5, 90, 2, 119, 194, 176, 195, 184, 0, 0]
Notification of Record Access Control Point
[6, 0, 1, 1]
I am assuming this is the Bluetooth SIG adopted Glucose Profile (the Glucose Service, which contains the Glucose Measurement characteristic), the specification of which is available from:
https://www.bluetooth.com/specifications/gatt/
Looking at the XML for the Glucose Measurement characteristic gives more detail on how the data is structured:
https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.glucose_measurement.xml
There will be a little bit of work to do to unpack the data.
For example, the first byte stores the first field, named Flags; you need to look at its low five bits for the various flags.
The next field is "Sequence Number", which is a uint16, so it takes two bytes. It's worth noting here that Bluetooth typically uses little-endian byte order.
Next is the Base Time field, which refers to https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.date_time.xml and takes the next 7 bytes.
Because some of the 9 fields in the characteristic take more than one byte, you are seeing 20 bytes for the 9 fields.
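Here is a sketch of that unpacking in Python (flag bits and field order taken from the Glucose Measurement XML above; the 16-bit SFLOAT decode follows IEEE 11073 and should be double-checked against your device). One caveat about the array in the question: byte pairs like 195, 164 and 194, 176 look like UTF-8 mojibake, so make sure you decode the raw notification bytes rather than a string round-trip of them.

```python
import struct

def sfloat(raw):
    """Decode an IEEE 11073 16-bit SFLOAT: 4-bit signed exponent,
    12-bit signed mantissa; value = mantissa * 10 ** exponent."""
    mantissa = raw & 0x0FFF
    exponent = raw >> 12
    if exponent >= 0x8:
        exponent -= 0x10
    if mantissa >= 0x800:
        mantissa -= 0x1000
    return mantissa * 10 ** exponent

def decode_glucose_measurement(data):
    flags = data[0]
    # Sequence Number (uint16 LE) then Base Time (year uint16 LE + 5 bytes).
    seq, year, month, day, hour, minute, second = struct.unpack_from("<HHBBBBB", data, 1)
    offset = 10
    out = {"sequence": seq, "base_time": (year, month, day, hour, minute, second)}
    if flags & 0x01:  # Time Offset present (sint16, minutes)
        out["time_offset_min"], = struct.unpack_from("<h", data, offset)
        offset += 2
    if flags & 0x02:  # Concentration + Type/Sample Location present
        raw, = struct.unpack_from("<H", data, offset)
        offset += 2
        out["glucose"] = sfloat(raw)
        out["unit"] = "mol/L" if flags & 0x04 else "kg/L"
        type_loc = data[offset]
        offset += 1
        out["type"] = type_loc & 0x0F
        out["sample_location"] = type_loc >> 4
    if flags & 0x08:  # Sensor Status Annunciation present (uint16)
        out["sensor_status"], = struct.unpack_from("<H", data, offset)
        offset += 2
    return out

# Synthetic packet (flags=0x03): seq 4, 2018-07-14 11:06:05, offset 90 min,
# concentration SFLOAT 119e-5 kg/L, type/sample-location 0x11.
sample = bytes([0x03, 4, 0, 0xE2, 0x07, 7, 14, 11, 6, 5, 90, 0, 0x77, 0xB0, 0x11])
print(decode_glucose_measurement(sample))
```

The synthetic packet is mine, for illustration only; run the decoder against the raw bytes your Accu-Chek actually delivers and compare against the XML field by field.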

Gmail API generating time stamp report

I am curious if I could get a report of messages sent and received that includes time stamps and email addresses.
I looked at the Gmail API documentation and I did not see anything that directly mentioned anything like that.
Thank you.
Here's the relevant function from the IMAPClient library (Gmail is also accessible over IMAP, rather than only via the Gmail API); maybe you can use it: http://imapclient.readthedocs.org/en/latest/index.html#imapclient.IMAPClient.fetch
>>> c.fetch([3293, 3230], ['INTERNALDATE', 'FLAGS'])
{3230: {b'FLAGS': (b'\\Seen',),
        b'INTERNALDATE': datetime.datetime(2011, 1, 30, 13, 32, 9),
        b'SEQ': 84},
 3293: {b'FLAGS': (),
        b'INTERNALDATE': datetime.datetime(2011, 2, 24, 19, 30, 36),
        b'SEQ': 110}}
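Building on that, here is a sketch of turning an IMAPClient fetch() result into a timestamp report. The reshaping helper is plain Python; the Gmail host, folder name, and login flow in the comment are assumptions to adapt, and adding "ENVELOPE" to the fetch list is the standard IMAP way to also get sender/recipient addresses.

```python
import datetime

def timestamp_report(fetch_result):
    """Reshape an IMAPClient-style fetch() result into (timestamp, uid)
    rows, oldest first, ready for CSV export."""
    rows = [(info[b"INTERNALDATE"], uid) for uid, info in fetch_result.items()]
    return sorted(rows)

# The network side would look roughly like this (host/credentials assumed):
#
#   from imapclient import IMAPClient
#   with IMAPClient("imap.gmail.com", ssl=True) as c:
#       c.login("you@gmail.com", app_password)
#       c.select_folder("[Gmail]/All Mail", readonly=True)
#       uids = c.search(["SINCE", datetime.date(2024, 1, 1)])
#       report = timestamp_report(c.fetch(uids, ["INTERNALDATE"]))
#
# Fetching "ENVELOPE" alongside "INTERNALDATE" also returns the parsed
# From/To addresses, covering the "email addresses" half of the report.

fake = {
    3230: {b"INTERNALDATE": datetime.datetime(2011, 1, 30, 13, 32, 9)},
    3293: {b"INTERNALDATE": datetime.datetime(2011, 2, 24, 19, 30, 36)},
}
print(timestamp_report(fake))
```

Note that Gmail requires IMAP access to be enabled and, nowadays, an app password or OAuth rather than your account password.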