I forked TurtleCoin and customized some of the config, including the address prefix. I compiled it on Windows using Visual Studio. The wallet and daemon work fine, but the miner does not. When I start the miner and add an address, I get this message:
Address is not valid: The address does not have the correct prefix corresponding to this coin - it appears to be an address for another cryptocurrency.
Where can I define the address prefix for the miner?
I followed these steps: https://github.com/turtlecoin/turtlecoin#windows
Interesting. Looking over the codebase, this error is referred to as ADDRESS_WRONG_PREFIX, which is returned by the function validateAddresses that the mining code calls here.
This function simply checks that the prefix matches WalletConfig::addressPrefix, so as long as you changed the address prefix in the wallet config located here, it should work. I hope this is helpful.
I want to run one of the TensorFlow object detection evaluation protocols [1]. I am new to it, and from the webpage I cannot understand where I would have to add the metrics_set configuration. For example:
EvalConfig.metrics_set='pascal_voc_detection_metrics'
I tried changing the value in the eval.proto file, where metrics_set is set to the value 8. Does anyone know if this is the right place to change it? I saw no effect from changing this value. And what does the "8" mean? In addition, what output should I expect?
Update:
I answered one of my questions: the place where I should change the setting is not the eval.proto, but in the configuration file:
eval_config: {
  metrics_set: 'weighted_pascal_voc_detection_metrics'
}
However, I still do not understand where I should see the effect of this, and my other questions remain unanswered.
[1]
https://github.com/tensorflow/models/blob/fd7b6887fb294e90356d5664724083d1f61671ef/research/object_detection/g3doc/evaluation_protocols.md
I think "8" is just a placeholder - it's the 8th entry in the eval.proto file.
When you run an evaluation job (eval.py), the metrics_set you specify determines the protocol used to compute metrics on the dataset specified in eval_input_reader. The results are written to an events summary file prefixed with events.out.tfevents, which you can visualize using TensorBoard or read with event_accumulator from tensorboard.backend.event_processing. The different metrics sets vary slightly, but I haven't tried them all, so you'll have to look into the details of each protocol.
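For example, here is a minimal sketch of reading the evaluation output without TensorBoard; the eval directory path is a placeholder you would replace with your own eval_dir:

from tensorboard.backend.event_processing import event_accumulator

# Load the events file written by the evaluation job
# ('/path/to/eval_dir' is a placeholder for your own eval directory).
ea = event_accumulator.EventAccumulator('/path/to/eval_dir')
ea.Reload()

# The scalar tags available depend on the metrics_set you chose,
# e.g. mAP entries for the PASCAL VOC protocols.
print(ea.Tags()['scalars'])

# Print the step/value pairs recorded for the first tag.
for event in ea.Scalars(ea.Tags()['scalars'][0]):
    print(event.step, event.value)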
In Bar.pm, I declare a class with an authority (author) and a version:
class Bar:auth<Camelia>:ver<4.8.12> {
}
If I use it in a program, how do I see which version of a module I'm using, who wrote it, and how the module loader found it? As always, links to documentation are important.
This question was also asked on perl6-users, but the thread died before a satisfactory answer (or links to docs) appeared.
Another wrinkle in this problem is that many people aren't adding that information to their class or module definitions. It shows up in the META.json file but not the code.
(Probably not a satisfying answer, because the facts of the matter are not very satisfying, especially regarding the state of the documentation, but here it goes...)
If the module or class was versioned directly in the source code à la class Bar:auth<Camelia>:ver<4.8.12>, then any code that imports it can introspect it:
use Bar;
say Bar.^ver; # v4.8.12
say Bar.^auth; # Camelia
# ...which is short for:
say Bar.HOW.ver(Bar); # v4.8.12
say Bar.HOW.auth(Bar); # Camelia
The ver and auth methods are provided by:
Metamodel::ClassHOW (although that documentation page doesn't mention them yet)
Metamodel::ModuleHOW (although that documentation page doesn't exist at all yet)
Unfortunately, I don't think the meta-object currently provides a way to get at the source path of the module/class.
By manually going through the steps that use and require take to load compilation units, you can at least get at the prefix path (i.e. which location from $PERL6LIB or use lib or -I etc. it was loaded from):
my $comp-spec = CompUnit::DependencySpecification.new: short-name => 'Bar';
my $comp-unit = $*REPO.resolve: $comp-spec;
my $comp-repo = $comp-unit.repo;
say $comp-repo.path-spec; # file#/home/smls/dev/lib
say $comp-repo.prefix; # "/home/smls/dev/lib".IO
$comp-unit is an object of type CompUnit.
$comp-repo is a CompUnit::Repository::FileSystem.
Neither of those documentation pages exists yet, and $*REPO is only briefly mentioned in the list of dynamic variables.
If the module is part of a properly set-up distribution, you can get at the meta-info defined in its META6.json (as posted by Lloyd Fournier in the mailing list thread you mentioned):
if try $comp-unit.distribution.meta -> %dist-meta {
    say %dist-meta<ver>;
    say %dist-meta<auth>;
    say %dist-meta<license>;
}
I'm working on integrating an experiment in PsychoPy with the EyeLink eye-tracking system. The way to do this seems to be through pylink. Unfortunately I'm really unfamiliar with pylink, and I was hoping there was a sample experiment that combines the two. I haven't been able to find one. If anyone could share an example, or point me towards a more accessible manual than the pylink API that SR Research provides, I'd be really grateful.
Thanks!
I am glad you found your solution. I have not used ioHub, but we do use PsychoPy with an EyeLink, so some of the following code may be of use to others who wish to invoke more direct communication. Note that our computers run Arch Linux. If none of the following makes any sense to you, don't worry about it, but maybe it will help others who are stumbling along the same path we are.
Communication between experimental machine and eye tracker machine
First, you have to establish communication with the EyeLink. If your experimental machine is turned on and plugged into a live EyeLink computer, then on Linux you first have to bring your ethernet interface up and then set the default address that the EyeLink uses (this also works for the EyeLink 1000; they kept the same address). Note that your ethernet interface will probably have a different name than enp4s0; run ip link and look for something similar. NB: these commands are typed into a terminal.
# To set up connection with Eyelink II computer:
ip link set enp4s0 up
ip addr add 100.1.1.2/24 dev enp4s0
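Once the interface is up, a quick way to check that pylink can reach the tracker is simply to try to connect from Python. This is only a sketch; 100.1.1.1 is the usual default address of the EyeLink host PC, so adjust it if your setup differs:

import pylink as pl

try:
    # The EyeLink host PC normally answers on 100.1.1.1
    # (the display PC gets the 100.1.1.2 address set above).
    el = pl.EyeLink('100.1.1.1')
    print('Connected to tracker')
    el.close()
except Exception as err:
    # pylink raises an exception if the tracker cannot be reached
    print('Could not connect:', err)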
Eyetracker functions
We have found it convenient to write some functions for talking to the Eyelink computer. For example:
Initialize Eyetracker
sp refers to the tuple of screenx, screeny sizes.
def eyeTrkInit(sp):
    el = pl.EyeLink()
    el.sendCommand("screen_pixel_coords = 0 0 %d %d" % sp)
    el.sendMessage("DISPLAY_COORDS 0 0 %d %d" % sp)
    el.sendCommand("select_parser_configuration 0")
    el.sendCommand("scene_camera_gazemap = NO")
    el.sendCommand("pupil_size_diameter = %s" % ("YES"))
    return el
NB: pl comes from import pylink as pl. Also, note that there is another Python library called pylink that you can find online; it is probably not the one you want. Go through the EyeLink forum and get pylink from there. It is old, but it still works.
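As a minimal usage sketch (the 1024 x 768 screen size here is just an assumed example):

import pylink as pl

# Connect to the tracker and configure it for a 1024 x 768 display.
el = eyeTrkInit((1024, 768))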
Calibrate Eyetracker
el is the name of the eyetracker object initialized above, sp is the screen size, and cd is the color depth, e.g. 32.
def eyeTrkCalib(el, sp, cd):
    pl.openGraphics(sp, cd)
    pl.setCalibrationColors((255, 255, 255), (0, 0, 0))
    pl.setTargetSize(int(sp[0]/70), int(sp[1]/300))
    pl.setCalibrationSounds("", "", "")
    pl.setDriftCorrectSounds("", "off", "off")
    el.doTrackerSetup()
    pl.closeGraphics()
    #el.setOfflineMode()
Open datafile
You can talk to the eye tracker and do things like opening a file
def eyeTrkOpenEDF(dfn, el):
    el.openDataFile(dfn + '.EDF')
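At the end of the experiment you will want to close and retrieve that file. Here is a sketch of the matching teardown as a hypothetical helper (eyeTrkCloseEDF is not part of our code above; dfn is the same base file name used when opening):

def eyeTrkCloseEDF(dfn, el):
    # Stop tracking, close the EDF on the host PC,
    # and copy it over the link to the local machine.
    el.setOfflineMode()
    el.closeDataFile()
    el.receiveDataFile(dfn + '.EDF', dfn + '.EDF')
    el.close()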
Drift correction
Or drift correct
def driftCor(el, sp, cd):
    blockLabel = psychopy.visual.TextStim(expWin, text="Press the space bar to begin drift correction",
                                          pos=[0, 0], color="white", bold=True,
                                          alignHoriz="center", height=0.5)
    notdone = True
    while notdone:
        blockLabel.draw()
        expWin.flip()
        if keyState[key.SPACE] == True:
            eyeTrkCalib(el, sp, cd)
            expWin.winHandle.activate()
            keyState[key.SPACE] = False
            notdone = False
Sending and getting messages
There are a number of built-in variables you can set, or you can add your own. Here is an example of sending a message from your Python program to the EyeLink:
eyelink.sendMessage("TRIALID "+str(trialnum))
eyelink.startRecording(1,1,1,1)
eyelink.sendMessage("FIX1")
tFix1On=expClock.getTime()
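At the end of a trial you would typically do the reverse. The TRIAL_RESULT message here follows the EyeLink Data Viewer convention; treat the exact messages as an assumption to adapt to your own analysis pipeline:

# Mark the end of the trial and stop recording.
eyelink.sendMessage("TRIAL_RESULT 0")
eyelink.stopRecording()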
Gaze contingent programming
Here is a portion of some code that uses the EyeLink's most recent sample in the logic of the experimental program.
while notdone:
    if recalib == True:
        dict['recalib'] = True
        eyelink.sendMessage("RECALIB END")
        eyelink.startRecording(1, 1, 1, 1)
        recalib = False
    eventType = eyelink.getNextData()
    if eventType == pl.STARTFIX or eventType == pl.FIXUPDATE or eventType == pl.ENDFIX:
        sample = eyelink.getNewestSample()
        if sample != None:
            if sample.isRightSample():
                gazePos = sample.getRightEye().getGaze()
            if sample.isLeftSample():
                gazePos = sample.getLeftEye().getGaze()
            gazePosCorFix = [gazePos[0] - scrx/2, -(gazePos[1] - scry/2)]
            posPix = posToPix(fixation)
            eucDistFix = sqrt((gazePosCorFix[0] - posPix[0])**2 + (gazePosCorFix[1] - posPix[1])**2)
            if eucDistFix < tolFix:
                core.wait(timeFix1)
                notdone = False
                eyelink.resetData()
                break
Happy Hacking.
Rather than pylink, you might want to look into using the ioHub system within PsychoPy. This is a more general-purpose eye tracking system that also allows for saving data in a common format (integrated with PsychoPy events), and provides tools for data analysis and visualisation.
ioHub is built to be agnostic to the particular eye tracker you are using. You just need to create a configuration file specific to your EyeLink system, and thereafter use the generic functions ioHub provides for calibration, accessing gaze data in real time, and so on.
There are some teaching resources accessible here: http://www.psychopy.org/resources/ECEM_Python_materials.zip
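I have not verified this against a live tracker, but the general pattern looks roughly like the following sketch; the device configuration key and settings are assumptions you would replace with the values for your own EyeLink (see the ioHub eye tracker documentation):

from psychopy.iohub import launchHubServer

# Device configuration for an EyeLink; the key path and settings
# shown here are assumptions to adapt to your own system.
iohub_config = {
    'eyetracker.hw.sr_research.eyelink.EyeTracker': {'name': 'tracker'}
}
io = launchHubServer(**iohub_config)
tracker = io.devices.tracker

tracker.runSetupProcedure()        # run calibration on the tracker
tracker.setRecordingState(True)    # start recording
gaze = tracker.getLastGazePosition()
tracker.setRecordingState(False)
io.quit()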
For future readers, I wanted to share my library for combining pylink and PsychoPy. I've recently updated it to work with Python 3. It provides simple-to-use, high-level functions.
https://github.com/colinquirk/templateexperiments/tree/master/eyelinker
You could also work at a lower level with the PsychoPyCustomDisplay class (see the pylink docs for more info about EyeLinkCustomDisplay).
For an example of it in use, see:
https://github.com/colinquirk/ChangeDetectionEyeTracking
(At the time of writing, this experiment code is not yet Python 3 ready, but it should still be a useful example.)
The repo also includes other modules for creating experiments and recording EEG data, but they are not necessary if you are just interested in the eyelinker code.
When I add to my config
$cfg['Servers'][$i]['ssl'] = TRUE;
trying to connect gives
phpMyAdmin - Error #2002 -
-- The server is not responding (or the local server's socket is not correctly configured)
I suspect this is because the mysql server requires
--ssl-ca=... --ssl-cert=... --ssl-key=...
How do I put those into the phpMyAdmin configuration?
Or is there some other problem that I'm missing?
It's all explained reasonably well in the manual; those options are configured through additional directives in config.inc.php.
See for instance http://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_ssl and the few paragraphs there that follow. (Or, to be more specific, see
http://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_ssl_key,
http://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_ssl_cert,
http://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_ssl_ca,
http://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_ssl_ca_path, and
http://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_ssl_ciphers).
In my case, I was able to achieve this through adding these directives:
$cfg['Servers'][$i]['ssl'] = true;
$cfg['Servers'][$i]['ssl_cert'] = '/etc/mysql/client-cert.pem';
$cfg['Servers'][$i]['ssl_ca'] = '/etc/mysql/ca-cert.pem';
$cfg['Servers'][$i]['ssl_key'] = '/etc/mysql/client-key.pem';
Adjusting of course for the correct paths on your local system. Good luck!
Just delete the my.ini file in ...wamp64\bin\mysql\mysql5.x.x and that's it. This file uses port = 3308, which isn't the default MySQL port.