Python 2.x to 3.9 - load and reload a class dynamically by name

I have some old code I wrote a long time ago.
It is supposed to load classes dynamically based on parameters:
family and dclass are folders,
then I instantiate that class with the parameters x, y, dir, name.
If the class is already loaded, then I reload it:
def load_device(self, x, y, name, family, dclass, dir, list=None):
    try:
        exec("reload('.\\%s\\%s')" % (family, dclass))
    except:
        exec("import %s" % dclass)
    exec("d=%s.%s(%s,%s,%s,\"%s\")" % (dclass, dclass, x, y, dir, name))
I already checked a couple of answers here and there, but can't find anything similar.
How do I do something similar in Python 3.9?
Thanks.
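EDIT: my current guess at the Python 3 equivalent is the sketch below, using importlib. I'm assuming here that the family and dclass folders are importable packages (i.e. they map to a dotted module path); the exec calls are replaced by getattr:

import importlib
import sys

def load_device(self, x, y, name, family, dclass, dir, list=None):
    module_name = "%s.%s" % (family, dclass)   # assumes the folders map to a package path
    if module_name in sys.modules:
        # module was loaded before: reload it to pick up changes on disk
        module = importlib.reload(sys.modules[module_name])
    else:
        module = importlib.import_module(module_name)
    cls = getattr(module, dclass)              # look the class up by name instead of exec
    return cls(x, y, dir, name)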

Related

Jmeter Beanshell: Accessing global lists of data

I'm using JMeter to design a test that requires data to be randomly read from text files. To save memory, I have set up a "setUp Thread Group" with a BeanShell PreProcessor containing the following:
//Imports
import org.apache.commons.io.FileUtils;
//Read data files
List items = FileUtils.readLines(new File(vars.get("DataFolder") + "/items.txt"));
//Store for future use
props.put("items", items);
I then attempt to read this in my other thread groups and am trying to access a random line in my text files with something like this:
(props.get("items")).get(new Random().nextInt((props.get("items")).size()))
However, this throws a "Typed variable declaration" error and I think it's because the get() method returns an object and I'm trying to invoke size() on it, since it's really a List. I'm not sure what to do here. My ultimate goal is to define some lists of data once to be used globally in my test so my tests don't have to store this data themselves.
Does anyone have any thoughts as to what might be wrong?
EDIT
I've also tried defining the variables in the setUp thread group as follows:
bsh.shared.items = items;
And then using them as this:
(bsh.shared.items).get(new Random().nextInt((bsh.shared.items).size()))
But that fails with the error "Method size() not found in class 'bsh.Primitive'".
You were very close; just add a cast to List so the interpreter knows what the expected object is:
log.info(((List)props.get("items")).get(new Random().nextInt((props.get("items")).size())));
Be aware that since JMeter 3.1 it is recommended to use Groovy for any form of scripting, as:
Groovy performance is much better
Groovy supports more modern Java features while with Beanshell you're stuck at Java 5 level
Groovy has plenty of JDK enhancements, e.g. the File.readLines() method
So a Groovy solution would look like the following:
In the first Thread Group:
props.put('items', new File(vars.get('DataFolder') + '/items.txt').readLines())
In the second Thread Group:
def items = props.get('items')
def randomLine = items.get(new Random().nextInt(items.size()))

Hadoop: wrong classpath in map reduce job

I'm running a Cloudera cluster in 3 virtual machines and trying to execute an HBase bulk load via a MapReduce job. But I always get the error:
error: Class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat not found
So it seems that the map process doesn't find the class. I tried the following:
1) adding the hbase.jar to the HADOOP_CLASSPATH on every node
2) adding TableMapReduceUtil.addDependencyJars(job) / TableMapReduceUtil.addDependencyJars(myConf, HFileOutputFormat.class) to my source code
Nothing worked. I have absolutely no idea why the class is not found, because the jar/class is definitely available in the classpath.
If I take a look into the job.xml I see the following entry:
name=tmpjars value=file:/C:/Users/Thomas/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5-cdh4.3.0/zookeeper-3.4.5-cdh4.3.0.jar,file:/C:/Users/Thomas/.m2/repository/org/apache/hbase/hbase/0.94.6-cdh4.3.0/hbase-0.94.6-cdh4.3.0.jar,file:/C:/Users/Thomas/.m2/repository/org/apache/hadoop/hadoop-core/2.0.0-mr1-cdh4.3.0/hadoop-core-2.0.0-mr1-cdh4.3.0.jar,file:/C:/Users/Thomas/.m2/repository/com/google/guava/guava/11.0.2/guava-11.0.2.jar,file:/C:/Users/Thomas/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar
This seems a little odd to me; these are my local jars on the Windows system. Maybe these should be the HDFS jars? If yes, how can I change the values of "tmpjars"?
Here is the java code I try to execute:
configuration = new Configuration(false);
configuration.set("mapred.job.tracker", "192.168.2.41:8021");
configuration.set("fs.defaultFS", "hdfs://192.168.2.41:8020/");
configuration.set("hbase.zookeeper.quorum", "192.168.2.41");
configuration.set("hbase.zookeeper.property.clientPort", "2181");
Job job = new Job(configuration, "HBase Bulk Import for " + tablename);
job.setJarByClass(HBaseKVMapper.class);
job.setMapperClass(HBaseKVMapper.class);
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(KeyValue.class);
job.setOutputFormatClass(HFileOutputFormat.class);
job.setPartitionerClass(TotalOrderPartitioner.class);
job.setInputFormatClass(TextInputFormat.class);
HFileOutputFormat.configureIncrementalLoad(job, hTable);
FileInputFormat.addInputPath(job, new Path("myfile1"));
FileOutputFormat.setOutputPath(job, new Path("myfile2"));
job.waitForCompletion(true);
LoadIncrementalHFiles loader = new LoadIncrementalHFiles(configuration);
loader.doBulkLoad(new Path("myFile3"), hTable);
EDIT:
I experimented a little more and it's totally strange. I added the following line to the Java code:
job.setJarByClass(HFileOutputFormat.class);
After I executed this, the first error was gone, but another ClassNotFoundException appeared:
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class mypackage.bulkLoad.HBaseKVMapper not found
HBaseKVMapper is the custom Mapper class I want to execute. So I tried to add it with "job.setJarByClass(HBaseKVMapper.class)", but that doesn't work since it's only a class file and not a jar. So I generated a jar file including HBaseKVMapper.class. After that I executed it again, and now I get the HFileOutputFormat.class not found exception again.
After debugging a little, I found out that the setJarByClass method only copies the local jar file to .staging/job_#number/job.jar on HDFS. So setJarByClass() only works for one jar file, because calling it again with another jar overwrites job.jar.
While searching for the error I saw the following structure in the job staging directory,
and inside the libjars directory I saw the relevant jar files.
So the HBase jar is inside the libjars directory, but the JobTracker doesn't use it for executing the job. Why?
I would try using Cloudera Manager (free version), as it takes care of these issues for you. Otherwise note the following:
Both your own classes and the HBase class HFileOutputFormat need to be available on the classpath locally and remotely.
Submitting the job
Meaning getting the classpath right locally for when your driver runs:
$ env HADOOP_CLASSPATH=$(hbase classpath) hadoop jar path/to/jar class....
On the server
In your hadoop-env.sh
export HADOOP_CLASSPATH=$(hbase classpath)
or use
TableMapReduceUtil.addDependencyJars
I found a "hacked" solution which works for me, but I'm not happy with it because it's not really practical.
My "hacked" solution:
create one big jar with all necessary class files (I called it "big.jar") and add it to the local (Eclipse) classpath
add the line job.setJarByClass(MyMapperClass.class) ... the MyMapperClass has to be in big.jar
When I execute this, big.jar is copied to the filesystem for every job. No errors anymore. The problem is that the jar is 80 MB in size and has to be copied every time.
If anyone knows a better way, I would be thankful if they could tell me how.
EDIT:
Now I am trying to execute jobs with Apache Pig and have exactly the same problem. My hacked solution doesn't work in this case because Pig creates the jobs automatically. Here is the Pig error:
java.lang.ClassNotFoundException: Class org.apache.hadoop.hbase.mapreduce.TableSplit not found

extra-paths not added to python path with zc.recipe.testrunner

I am trying to run tests by adding a version of tornado downloaded from github.com to sys.path.
[tests]
recipe = zc.recipe.testrunner
extra-paths = ${buildout:directory}/parts/tornado/
defaults = ['--auto-color', '--auto-progress', '-v']
But when I run bin/tests I get the following error:
ImportError: No module named tornado
Am I not understanding how to use extra-paths?
Martin
Have you tried looking into the generated bin/tests script to see if it contains your path? That will tell you definitively whether your buildout.cfg is correct or not. Maybe the problem is elsewhere, because your configuration seems OK.
If you happen to regularly include various branches from git/mercurial or elsewhere in buildout, you might be interested in mr.developer. mr.developer can download packages and add them to develop =, so you won't need to set extra-paths in every section; see the sketch below.
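As a rough sketch, a buildout.cfg using mr.developer might look like this (the repository URL and section layout here are assumptions, not taken from the question):

[buildout]
extensions = mr.developer
sources = sources
auto-checkout = tornado

[sources]
tornado = git https://github.com/tornadoweb/tornado.git

[tests]
recipe = zc.recipe.testrunner
eggs = tornado

With something like this, mr.developer checks tornado out and registers it as a develop egg, so the testrunner should pick it up without an extra-paths entry.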

Magento - Trouble with setting up Model Read Adapter

I was following along with Alan Storm's tutorial on Magento's Model and ORM basics and I've run into a bit of a problem. When I get to the portion where I load from the Model for the first time, I get the error "Fatal error: Call to a member function load() on a non-object...". I've already reset everything and tried again from scratch, but I still get the same problem. My code looks like this:
$params = $this->getRequest()->getParams();
$blogpost = Mage::getModel('weblog/blogpost');
var_dump($blogpost);
echo("Loading the blogpost with an ID of ".$params['id']);
$blogpost->load($params['id']);
As you can see I dumped the contents of $blogpost and it shows that it is just a boolean false. My guess is that there's either a problem with the connection to the database or, for some reason, the code for Mage::getModel() didn't get installed correctly.
EDIT - Adding Code
There's so many that I just decided to pastebin them lol
app/code/local/Ahathaway/Weblog/controllers/IndexController.php
app/code/local/Ahathaway/Weblog/etc/config.xml
app/code/local/Ahathaway/Weblog/Model/Blogpost.php
app/etc/modules/Ahathaway_Weblog.xml
Your Model/Blogpost.php file should actually be Model/Mysql4/Blogpost.php, and you are missing the real Model/Blogpost.php.
My guess is that Mage cannot find your model class. Double-check the module/model name and also verify that the model is in the correct place in the filesystem (it should be in app/code/local/Ahathaway/Weblog/Model/Blogpost.php).
You also need to check that your config.xml correctly defines your model classes. It would be best if you could paste your config.xml and your model class...
A quick glance reveals you're missing the model resource. Go back to the section around the following code example
File: app/code/local/Alanstormdotcom/Weblog/Model/Mysql4/Blogpost.php
class Alanstormdotcom_Weblog_Model_Mysql4_Blogpost extends Mage_Core_Model_Mysql4_Abstract
{
    protected function _construct()
    {
        $this->_init('weblog/blogpost', 'blogpost_id');
    }
}

getting result from a function running "deferToThread"

I have recently started working with Twisted and am not very familiar with its functions. I have a problem related to the "deferToThread" method... here is my code using this method:
from twisted.internet.threads import deferToThread
from twisted.internet import reactor
results=[]
class Tool(object):
    def exectool(self, tool):
        # print "Test Class Exec tool running..........."
        exec tool
        return
    def getResult(self, tool):
        return results.append(deferToThread(self.exectool, tool))
to=Tool()
to.getResult(tools)
f=open(temp).read()
obj_tool=compile(f, 'a_filename', 'exec')
[<code object <module> at 0x8ce7020, file "a_filename", line 1>, <code object <module> at 0x8cd4e30, file "a_filename", line 2>]
I am passing the tools one by one to the getResult() method; it executes successfully and prints the results of the scripts written in the file objects.
I have to store the result of the tools' execution in some variable so that I can save it in the database. How do I achieve this? When I call re=to.getResult(tools) and print "re", it prints None.
I have to store its results in the database. Is there something I can do?
Thanks in advance.
There are two problems here.
First, deferToThread will not work if you never start the reactor. Hopefully this code snippet was actually extracted from a larger Twisted-using application where the reactor is running, so that won't be an actual problem for you. But you shouldn't expect this snippet to work unless you add a reactor.run() call to it.
Second, deferToThread returns a Deferred. The Deferred fires with the result of the callable you passed in. This is covered in the API documentation. Many APIs in Twisted return a Deferred, so you might want to read the documentation covering them. Once you understand how they work and how to use them, lots of things should be quite a bit easier.
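As a rough sketch of putting those two points together (save_to_database is a placeholder here, not part of the question's code): the Deferred fires with whatever the threaded callable returns, so you attach a callback that stores the result.

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def run_tool(tool):
    # Runs in a worker thread; the return value becomes the Deferred's result
    namespace = {}
    exec(tool, namespace)
    return namespace.get('result')

def save_to_database(result):
    # Placeholder: write the result wherever it needs to go
    print('saving', result)

d = deferToThread(run_tool, "result = 6 * 7")
d.addCallback(save_to_database)          # called with run_tool's return value
d.addCallback(lambda _: reactor.stop())  # shut down once the result is handled
reactor.run()

The key difference from the original getResult() is that nothing tries to read the result synchronously; the value only exists once the callback fires.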