Re: printing and global variables within TclBC script

From: Leandro Martínez (leandromartinez98_at_gmail.com)
Date: Thu Oct 25 2007 - 02:11:09 CDT

Hi Peter, thanks again,
I am trying to compute something similar to the steered molecular dynamics
force, but applied in a different way: I need a dynamic reference position
that depends on the current position of each atom as well as on its previous
positions. This dynamic reference vector has to be carried along, and it is
updated for each atom once in a while.
Anyway, I have implemented this using a tclForces script. I need information
for about 400 atoms, and the delay caused by the script is actually not so
bad; I think it will matter even less once I add water to my system. For now
I can live without the parallelization of the BC script :-). If the
simulation later turns out to be too slow, maybe I will try to write it
inside the code itself (I hope I don't have to do that...).
Thank you very much,
Leandro.
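
For reference, here is a minimal tclForces sketch of the kind of per-atom
dynamic reference described above. The force constant k, the mixing factor
alpha, the refpos array, and the 400-atom range are illustrative assumptions,
not the actual script:

  # tclForcesScript sketch: harmonic pull toward a per-atom reference
  # position that slowly relaxes toward the atom's current position.
  set k     5.0    ;# force constant (kcal/mol/A^2), assumed
  set alpha 0.01   ;# how fast the reference follows the atom, assumed

  set myatoms {}
  for { set id 1 } { $id <= 400 } { incr id } {
      addatom $id              ;# request coordinates for this atom
      lappend myatoms $id
      set refpos($id) ""
  }

  proc calcforces { } {
      global myatoms refpos k alpha
      loadcoords coords
      foreach id $myatoms {
          lassign $coords($id) x y z
          if { $refpos($id) eq "" } { set refpos($id) [list $x $y $z] }
          lassign $refpos($id) rx ry rz
          # harmonic force pulling the atom toward its dynamic reference
          addforce $id [list [expr {$k*($rx-$x)}] \
                             [expr {$k*($ry-$y)}] \
                             [expr {$k*($rz-$z)}]]
          # update the reference ("once in a while" could instead test getstep)
          set refpos($id) [list [expr {$rx + $alpha*($x-$rx)}] \
                                [expr {$ry + $alpha*($y-$ry)}] \
                                [expr {$rz + $alpha*($z-$rz)}]]
      }
  }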

On 10/25/07, Peter Freddolino <petefred_at_ks.uiuc.edu> wrote:
>
> Hi Leandro,
> I see what you're after... sorry, I think I missed this initially. Yes,
> this is an intrinsic limitation of the tclbc implementation, that
> migrating atoms won't carry persistent data with them. I'm not aware of
> any way in tclbc to send data along with atoms. I don't think there's
> any way to know when an atom is going to migrate without adding some
> hooks pretty deep inside namd, which aren't currently present. I know
> having some way to synchronize data between processors in tclbc is on
> the grand unified namd wish list. Is there any data in particular that
> you're wanting to pass along?
>
> Peter
>
> Leandro Martínez wrote:
> > Hi Peter,
> > Thanks for your answer. The problem with printing (which is not the
> > most important issue) is that I don't know how to print information
> > outside the calcforces procedure from only one processor. But that's
> > not so important, since it only limits printing.
> >
> > I made some tests, and the information is not persistent between
> > time-steps when an atom that was assigned to one processor is moved
> > to another processor. Something as simple as
> >
> > for { set i 1 } { $i <= $natoms } { incr i } { set test($i) 0 }
> > proc calcforces { step unique } {
> >     global test
> >     while { [nextatom] } { incr test([getid]) }
> > }
> >
> > should return, at the end, "test = number of steps" for every atom.
> > However, if an atom that is being handled by processor 1 passes to
> > processor 2, the counter is reset to 0 and, therefore, I cannot use
> > the previous information for that atom to compute anything.
> >
> > I understand that this is an intrinsic limitation because the Tcl
> > interpreters are really independent; am I wrong? If I knew which atoms
> > were going to be exchanged between processors, and at which time-step,
> > then I could work around this problem, maybe by writing data to a file
> > and reading it back. Is this information available anywhere?
> >
> > Thanks,
> > Leandro.
> >
> >>> Another question: within the script I update some vectors that will
> >>> be used for the calculations in the next step. Fortunately each vector
> >>> holds only atom-specific information, so distributing the work among
> >>> several processes is possible. However, I'm not sure how the scripts
> >>> handle these updated vectors when the script finishes. I mean, do all
> >>> the scripts update the vectors before being launched again in the next
> >>> step, or is this information lost?
> >>> Thanks in advance,
> >>> Leandro.
> >>>
> >> If the information is in variables defined as globals inside the
> >> tclBCScript but outside of the calcforces routine, and brought into
> >> the calcforces routine as globals, it should be persistent between
> >> timesteps. You should probably test this yourself to verify that the
> >> information is being passed between timesteps the way you want...
> >>
> >> Best,
> >> Peter
> >>
> >>>
>
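
Following up on the point in the quoted messages above, a minimal tclBC
sketch showing a per-atom counter kept in a global array. This is just an
illustration (the 400-atom range is assumed, and getid is assumed to return
the 1-based atom ID as in the PDB file): the array is persistent between
timesteps on a given processor, but an atom that migrates to another
processor continues from that processor's copy of the counter, which is
exactly the limitation discussed above:

  set natoms 400                    ;# assumed number of atoms of interest
  for { set i 1 } { $i <= $natoms } { incr i } { set test($i) 0 }

  proc calcforces { step unique } {
      global test natoms
      while { [nextatom] } {
          set id [getid]
          if { $id <= $natoms } {
              incr test($id)        ;# counts steps seen on this processor only
          }
      }
      # the "unique" flag is set for only one interpreter per step,
      # so this prints once per 100 steps
      if { $unique && $step % 100 == 0 } {
          print "step $step: test(1) on this processor = $test(1)"
      }
  }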
