make(1), and have a working knowledge of rsync or scp. If you want to build these tools, then I assume you know something about them.
You'll need a machine with the msrc tools and msrcmux installed and an unpacked copy of the current master source. To get those (if you don't have them) use the msrcmux HTML document. I'm going to call that machine "msrc.example.com". If that doesn't work to build the first local master source repository, then follow the source web site's instructions to use the package boot sequence.
The msrcmux install instructions now allow you to operate entirely as a mortal user, if you like. Optionally you could install the service on a virtual machine, your workstation, or in a chroot. You need to set up the network service (msrcmux) on port 1, which is a privileged port, or on port 1081, which is not. You may also need to build a reverse IP map for the instances you want to configure. If you don't have a configuration file for hxmd that describes your hosts, then the boot sequence only requires that you build one for localhost.
You'll need rsync installed; see the rsync website for help, or install a binary package for your OS.
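A quick way to confirm rsync is already on your search path (the version banner will vary by build):

$ rsync --version | sed 1q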
If you want this to work on HP-UX you're going to have to install some other packages first: gcc, flex, bison, perl, and maybe others. I don't have any HP-UX hosts where I can test anymore.
You do need the unpacked master source, as stated above. If you don't have the unpacked master source you should see the msrcmux HTML document, which explains how to pull the tar archives, then unpack them into the correct hierarchy.
The msrcmux service streams a tar archive of the platform source for a specified level 2 product. For example my workstation (sulaco) might connect to the msrcmux service on our local master source server to ask for the platform source for /etc/services. If that server has a matching master directory and an msrc configuration file that lists the client, then it streams the tar archive of the configured source directory to the client (over RFC1078).
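If you're curious you can poke that RFC1078 (tcpmux) handshake by hand. This is only an illustration: it assumes a BSD-style nc(1) is installed and that the service listens on port 1 (use 1081 if that's where you put it). Per RFC1078 the first reply line starts with "+" when the named service is accepted and "-" when it is refused; everything after that is msrcmux's own business.

$ printf 'msrcmux\r\n' | nc -w 3 msrc.example.com 1 | sed 1q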
The directory sent follows local site policy. My local site policy requires a make recipe that has an install target (and several others, see the templates).
The configuration file doesn't have to live under hxmd's library directory, but it does have to be on the host running msrcmux.
Make sure that your bin and sbin directories are on your shell's search path, hook in a perl library in case we need that, and go ahead and update MANPATH:

predator ~ 1 vi .profile	# add lines as you see fit
Add these lines near the end:
export PATH=$HOME/bin:$HOME/sbin:$PATH
export PERL5LIB=$HOME/lib/perl5
export MANPATH=$HOME/man:${MANPATH:-/usr/man:/usr/local/man:/usr/share/man}
save and exit
predator ~ 2 . ./.profile

Or login again, as you like.
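A quick sanity check that the new settings took (your values will differ):

predator ~ .. echo $PATH
predator ~ .. echo $PERL5LIB
predator ~ .. echo $MANPATH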
Next we'll set up the environment for the build. We need to set 3 variables:

MPS
	the name of the host providing the msrcmux service (the master pull server)
CFG
	the configuration file the msrcmux service should read to configure this host. Note that this is a path on the service provider which may not be the local host.
MSRCMUX
	only needed when the msrcmux service is on a port other than 1. In that case we'll have to set MSRCMUX to specify both the port option (-p 1081 or -p 1) and the name of the RFC1078 service (msrcmux).
predator ~ 3 MPS=localhost CFG=/tmp/myself.cf MSRCMUX="-p 1081 msrcmux"
predator ~ 4 export MPS CFG MSRCMUX
Then we need to muxcat down the recipe file to get started building tools:

predator ~ 5 muxcat $MPS $MSRCMUX Pkgs/msrc_base/relocate $CFG |tar xf -
predator ~ 6 make -e hosttype
predator ~ 7 make -e msrc_base
make[1]: Leaving directory `~'
echo 'next "make -e install_base"'
next "make -e install_base"
predator ~ 8 make -e install_base
...lots more output...
*** Mortal install of installus skipped
...
*** Mortal install of op skipped
echo 'If you need it, "make -e entomb_base" or "make -e ad-hoc-list"'
If you need it, "make -e entomb_base" or "make -e ad-hoc-list"

Check that mk's %~ starts with your home directory path:

predator ~ 9 mk -V
...

Next we'll build the stray level2 products that I have recipe rules for. Some of these fail based on platform or installed library depends, but that's OK:
predator ~ 10 make -e ~/usr/local/lib/libgdbm.a
standard GNU build spew...
predator ~ 11 make -e ~/usr/local/sbin/level2s ~/usr/local/bin/oue
...
predator ~ 12 oue -V
oue: $Id: oue.m,v 2....
oue: safe file template: oueXXXXXX
predator ~ 13 make -s ad-hoc-list | xapply -fx 'make -e %1' -
lots of build and installation output
Failure to build ~/usr/local/sbin/level3s

Next we'll see what didn't get built (besides level3s):
predator ~ 14 make -s ad-hoc-list
List of things that did not build
predator ~ 15

That session is pretty typical. It is possible to script the whole thing if all your hosts need to pull the same updates. It is also possible to push the pull recipe to the target host.
If you want the setuid applications op and installus, then you will have to su to the superuser to build them. You'll need to set HOME (which su resets):

predator ~ 15 su
Password:
# export HOME=`pwd`
# make -e $HOME/bin/op $HOME/sbin/installus
# : if you have a local op rule-set master source installed, run this too
# make -e $HOME/lib/op/class.cf
# exit
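Back in your mortal shell a quick listing should show the set-uid bit on both programs if the build went as intended (output omitted here):

predator ~ .. ls -l $HOME/bin/op $HOME/sbin/installus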
Alternatively op may be installed in your bin as a sentinel. See op's references HTML document for details, read past "Compile a sentinel copy of op". Then use something like:

predator ~ .. make -e GROUP=`id -gn` ${HOME}/bin/op

where you can substitute any group you are in for `id -gn`.
Test that by running the new binary under -V and then under -S. Then build your rule-base.
Sadly installus is not going to work as a mortal cannot read the password file. You can build it and install it, but all the password requests fail, and it is not safe to run without that authentication.
Next build the manual pages:

predator ~ 15 make -e man-all

You'll have to run MANPATH through $HOME/man to see the pages. Fix that in your profile, if you didn't in the first step.
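A quick way to check that the pages are reachable (man -w is not universal, so just look at MANPATH if your man(1) lacks it):

predator ~ .. echo $MANPATH
predator ~ .. man -w xapply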
If you don't want to keep the install backup files and other build litter you should remove them:

predator ~ 15 find ~/usr/src -name Makefile -print | \
	xapply -fx 'cd %[1/-$] && make clean' -

Or just remove the whole source cache, since the source cache is easy to rebuild:
predator ~ 15 rm -rf ~/usr/src

That will not clean up anything you built as the superuser (you'd have to su to do that).
Then remove install's back-out files. I remove the pull recipe so we don't use it again without getting a fresh copy from the master source pull server. Also let's remove the version of muxcat in our bin, so it doesn't get stale:

predator ~ 16 purge -vd0 ~/usr
predator ~ 17 rm -f ~/Makefile ~/bin/muxcat

Lastly the values of MPS and CFG were recorded in auto.cf, let's look at them:
predator ~ 18 grep MSRCMUX ~/usr/local/lib/hxmd/auto.cf
MSRCMUX="msrcmux"
MSRCMUX_MPS="msrc.example.com"
MSRCMUX_CFG="/usr/local/lib/hxmd/ksb.cf"
The information encoded in that file allows local automation to take action to see if it is available (-BMSRCMUX), then fall back to another tactic (-N else) when it is not available:

predator ~ 19 hxmd -Cauto.cf -G localhost -BMSRCMUX -N '%0echo fail' \
	'echo MSRCMUX MSRCMUX_MPS MSRCMUX_CFG'
msrcmux msrc.example.com /usr/local/lib/hxmd/ksb.cf

This could be handy if you have automation to pull newer versions automatically. Note that a push will clobber the information in auto.cf, which is not always a good thing. If you need a local policy file to keep these parameters, build one. (And pushes to your home directory are not commonly supported.)
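As a sketch of that update tactic, one could lean on the same macro expansion to re-pull the relocate package with the recorded values; the fallback message here is made up, so check the command against your own auto.cf before trusting it:

predator ~ .. hxmd -Cauto.cf -G localhost -BMSRCMUX -N '%0echo no pull service recorded' \
	'muxcat MSRCMUX_MPS MSRCMUX Pkgs/msrc_base/relocate MSRCMUX_CFG |tar xf -'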
There is a spell in the recipe file to update the copy in your home directory with one that has the values from auto.cf set as the default values:

predator ~ 20 make reload
...
hxmd -Cauto.cf -Glocalhost -BMSRCMUX -N "%0echo fail; exit 69" \
	"sh %1 | install -c - your-home/Makefile" update.m4
predator ~ 20

You might use it more as a reference for building your own update scripts. The recipe file is clever in that it works with or without being run through m4.
distrib's lib

We do not install distrib's library directory. Local policy still uses it, but we're trying to get off that really bad habit. If you want to build it (viz. for local.defs):

predator ~ .. make -e $HOME/lib/distrib
msh doesn't work

The message shell (msh) has a RUN_LIB macro we could set, but you should install it someplace you can trust before you make it anyone's shell. This prevents the -V option from working, since it won't succeed without the message directory.
hostlint's policy

The libexec/hostlint-policy directory won't work unless your host is marked with the policycache service, which is a local site policy here, but mayhap not at your site. But hostlint's versions.hlc takes an undocumented environment variable PSEUDO_ROOT to force it to check versions rooted below a directory other than slash, so the policy is actually useful to you. Fetch it with an explicit rsync reference:

predator ~ .. rsync -v hostlint.example.com::hostlint/versions.hlc .
versions.hlc...
predator ~ .. rsync -v hostlint.example.com::hostlint/vercmp .
vercmp.pl...
predator ~ .. PSEUDO_ROOT=$HOME ./versions.hlc
I moved to ANSI C (for the most part), which means you need a better C compiler than HP-UX ships with.

NFS mounted home directories limit the usefulness of this tactic, as you may have to support different compiler output formats from a common home directory. That's your local site policy in action, not mine. The work-around is to build subdirectories named for functions of uname output, then set variables in your profile to match those. E.g. bob builds a copy with "HOME=/home/bob/minix7-i786" on his new Minix workstation.
alias bugs

Some versions of ksh and bash will not let go of a tracked alias. So if you tried to run xapply before you installed it, then the shell will continue to fail to see the new binary, unless you do something screwy:

predator ~ .. exec csh
% exec ksh -i -o vi
predator ~ ..
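Under bash alone, clearing the shell's command hash table may be enough of a fix (I've not checked every ksh for an equivalent):

predator ~ .. hash -r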
rsync and tcpmux

If your instance can't get to the rsync port or the tcpmux port from your host, due to network or firewall restrictions, use ssh port forwarding to map those ports to local loop-back ports. See below.
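For example, to reach a blocked rsync daemon through a gateway you can ssh to (the host and module names here are made up, substitute your own; the tcpmux forward is spelled out in the ssh port forward section below):

predator ~ .. ssh -f -N -L 8730:rsync.example.com:873 gateway.example.com
predator ~ .. rsync -v --port=8730 localhost::module/some-file .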
Once you have msrc_base installed you may restore MPS and CFG from the saved values in auto.cf:

$ eval `hxmd -Cauto.cf -G localhost "echo export CFG=MSRCMUX_CFG MPS=MSRCMUX_MPS"`
Then just pick up where you left off. This is also a good spell to use for automated update scripts. Later you can use efmd and remove the echo shim.
ssh port forward

I'll assume that PATH on the remote host visits ~/bin (if it doesn't, then you'll need to build that directory and add it to your search path), and that you've edited your .profile as in step 1 above (on the target machine).
First build a hxmd configuration file which maps our local hostname to the target machine's HOSTTYPE, HOSTOS, and whatever other attributes are required to meet local site policy. We do this because the proxy connection comes from this host (localhost or the reverse name for our IP address), since it makes the network connection to the tcpmux service.
local$ vi /tmp/proxy.cf
Next, scp the Makefile.host from the master source cache to Makefile on the remote machine. Also copy the muxcat program (a perl script) to bin/muxcat. Use ssh's -R option to forward the local tcpmux service to the remote machine on port 3001:

local$ scp `whence muxcat` remote:bin/muxcat
local$ scp /usr/msrc/Pkgs/msrc_base/relocate/Makefile.host remote:Makefile
local$ ssh -R 3001:$MPS:1 remote

From the remote shell set MPS, with a prefix to set the port option to muxcat:

remote$ export MPS="-p 3001 localhost" CFG=/tmp/remote.cf
remote$ make -e hosttype
pick up at command 7
Note that this makes the MPS attribute in auto.cf less-than-useful. This is because we may not have a clear network path back to the host, since we needed an ssh tunnel to get here the first time, and the value in auto.cf is not going to represent the client's network view. This also makes the restart spell above less-than-useful.
See the mpull HTML document for another way to pull level 2 products. Since the pull recipe knows how to install mpull you may be able to use that to upgrade level 2 products later (but mpull doesn't know how to fit them into your home directory).
It is also possible to use an sshfs mount of the local master source cache to boot-strap a machine (by building mmsrc under a local source cache). I've not published the recipe to do that, mostly because it is harder. If you need that spell, let me know.
$Id: relocate.html,v 2.15 2014/01/29 21:29:46 ksb Exp $ by ksb.