Technical Report - International Military Testing Association
UNCLASSIFIED/UNLIMITED<br />
<strong>Technical</strong><br />
<strong>Report</strong><br />
distributed by<br />
Defense <strong>Technical</strong> Information Center<br />
DEFENSE LOGISTICS AGENCY<br />
Cameron Station, Alexandria, Virginia 22314<br />
UNCLASSIFIED/UNLIMITED
Coordinated By:<br />
AIR FORCE HUMAN RESOURCES LABORATORY<br />
AIR FORCE SYSTEMS COMMAND<br />
Brooks Air Force Base, Texas<br />
Approved for public release; distribution unlimited<br />
El Tropicano Motor Hotel<br />
San Antonio, Texas<br />
28 October - 2 November 1973<br />
PROCEEDINGS<br />
15th Annual Conference<br />
of the<br />
<strong>Military</strong> <strong>Testing</strong> <strong>Association</strong>
FOREWORD<br />
The 15th Annual Conference of the Military <strong>Testing</strong> <strong>Association</strong> was<br />
held at the El Tropicano Motor Hotel, San Antonio, Texas, 28 October -<br />
2 November 1973.<br />
The papers presented at the 15th Annual Conference of the MTA reflect<br />
a diversity of subject matter. These presentations from representatives<br />
of both the military and civilian community reflect the opinions of the<br />
authors and are not to be construed as official or in any way representative of<br />
the views and policies of the United States Armed Services or the armed<br />
services of the foreign countries participating in this meeting.<br />
- OFFICIAL PROGRAM -<br />
15th Annual Conference<br />
MILITARY TESTING ASSOCIATION<br />
El Tropicano Lobby<br />
1300 - 1700<br />
El Tropicano Lobby<br />
0800 - 1600<br />
Hemisfair Room<br />
Ground Floor<br />
0900<br />
0900 - 0915<br />
0915 - 0945<br />
1015 - 1030<br />
28 October through 2 November 1973<br />
Sunday, October 28<br />
Registration<br />
Monday, October 29<br />
Registration<br />
Session Chairman<br />
Col Theodore B. Aldrich<br />
Air Force Human Resources Laboratory<br />
Conference called to order<br />
Greetings by AFHRL Commander<br />
Col Harold E. Fischer<br />
Air Force Human Resources Laboratory<br />
Keynote Address 1<br />
Lt Gen John W. Roberts<br />
Deputy Chief of Staff, Personnel<br />
Headquarters, United States Air Force<br />
Introductory Comments<br />
Col Theodore B. Aldrich<br />
Air Force Human Resources Laboratory<br />
Administrative Announcements<br />
Coffee Break
1030 - 1200 Symposium<br />
Screening for military performance and behavior<br />
1200 - 1330<br />
Hemisfair Room<br />
Ground Floor<br />
1330 - 1430<br />
Chairman<br />
Dr. Ralph R. Canter<br />
Office of the Assistant Secretary of Defense<br />
(M&RA)<br />
Participants<br />
Maj James Taylor<br />
United States Air Force<br />
Comdr James Murphy<br />
United States Navy<br />
Lt Col Todd Graham<br />
United States Army<br />
Dr. D. Brian Murphy<br />
Systems Development Corporation<br />
Dr. Robert G. Smith<br />
Human Resources Research Organization<br />
Discussant<br />
Dr. Norman J. Kerr<br />
Naval <strong>Technical</strong> Training Command<br />
Lunch<br />
Session Chairman<br />
Mr. Harald E. Jensen<br />
Air Force Human Resources Laboratory<br />
Symposium<br />
Chairman<br />
Dr. Lonnie D. Valentine<br />
Air Force Human Resources Laboratory<br />
Participants<br />
Maj Verna S. Kellogg III<br />
Armed Forces Vocational <strong>Testing</strong> Group<br />
Capt Melvin T. Gambrell, Jr.<br />
Armed Forces Vocational <strong>Testing</strong> Group<br />
Dr. Harry D. Wilfong<br />
Armed Forces Vocational <strong>Testing</strong> Group<br />
1430 - 1450<br />
1450 - 1505<br />
1505 - 1535<br />
1535 - 1550<br />
1550 - 1610<br />
Monterrey Suite<br />
Room 909<br />
1730 - 1830<br />
Acapulco Room<br />
Room 902<br />
Development of the Armed Services Vocational<br />
Aptitude Battery<br />
Dr. M. A. Fischl<br />
United States Army Research Institute for the<br />
Behavioral and Social Sciences<br />
Coffee Break<br />
Session Chairman<br />
Capt James A. Hoskins<br />
Air Force Human Resources Laboratory<br />
New developments in defense language proficiency<br />
testing<br />
Drs. C. D. Leatherman and A. Al-Haik<br />
United States Army, Defense Language Institute<br />
Presented by: Mr. Sydney Sako<br />
Enlisted selection and classification testing in<br />
the U. S. Army<br />
Dr. Milton H. Maier<br />
United States Army Research Institute for the<br />
Behavioral and Social Sciences<br />
Presented by: Dr. M. A. Fischl<br />
The development of an English Comprehension Level<br />
screening test for foreign students<br />
Mr. Cortez Parks<br />
United States Army, Defense Language Institute<br />
END OF MONDAY SESSIONS<br />
0800 - 1600 Registration<br />
MTA President's welcoming and social hour<br />
Tuesday, October 30<br />
Hemisfair Room<br />
Ground Floor<br />
0845 - 0900<br />
0900 - 0920<br />
0920 - 0945<br />
0945 - 1000<br />
1000 - 1155<br />
Session Chairman<br />
Capt James A. Hoskins<br />
Air Force Human Resources Laboratory<br />
Call to order<br />
An experimental multimedia course development<br />
effort for new mental standards airmen<br />
Mr. George P. Scharf<br />
USAF School of Applied Aerospace Sciences<br />
Optimal utilization of on-the-job training and<br />
technical training school<br />
Capt Alan D. Dunham<br />
National Security Agency<br />
Coffee Break<br />
Session Chairman<br />
Mr. Manuel Pina<br />
Air Force Human Resources Laboratory<br />
Symposium<br />
Translation of training research into training<br />
action - a missing link<br />
Chairman<br />
Dr. G. Douglas Mayo<br />
Naval <strong>Technical</strong> Training Command<br />
Participants<br />
From the viewpoint of a training manager<br />
Mr. Walter E. McDowell<br />
Army Training and Doctrine Command<br />
From the viewpoint of a training researcher<br />
Dr. Norman J. Kerr<br />
Naval <strong>Technical</strong> Training Command<br />
Discussants<br />
Lt Col Donald F. Mead<br />
Air Training Command<br />
Dr. Earl I. Jones<br />
Navy Personnel Research & Development Center<br />
1155 - 1200<br />
1200 - 1330<br />
Monterrey Suite<br />
Room 909<br />
1330 - 1500<br />
Hemisfair Room<br />
Ground Floor<br />
1330 - 1340<br />
1340 - 1400<br />
1400 - 1420<br />
1420 - 1445<br />
1445 - 1510<br />
1510 - 1525<br />
Reaction Panel<br />
Comdr Bruce Cormack<br />
Canadian Armed Forces<br />
Mr. William B. Lecznar<br />
Air Force Human Resources Laboratory<br />
Mr. Ralph Canter<br />
Office of the Assistant Secretary of Defense<br />
(M&RA)<br />
Administrative Announcements<br />
Meeting of the Steering Committee<br />
Session Chairman<br />
Mr. William J. Stacy<br />
Air Force Human Resources Laboratory<br />
Administrative Announcements<br />
Test feedback and training objectives<br />
LtJG Carroll H. Greene<br />
United States Coast Guard Training Center<br />
Assessment Systems, Incorporated<br />
Prospective chief petty officer leadership/<br />
management seminar<br />
Comdr G. C. Hinson<br />
United States Coast Guard Training Center<br />
Transcendental Meditation (TM): A threat or<br />
technique in human personnel evaluation<br />
Dr. R. O. Waldkoetter<br />
Consultant, Education and Personnel Systems<br />
Coffee Break<br />
1525 - 1545<br />
1545 - 1600<br />
1600 - 1630<br />
Hemisfair Room<br />
Ground Floor<br />
0830<br />
0830 - 0850<br />
0850 - 0920<br />
Session Chairman<br />
Capt Nicholas C. Varney<br />
Air Force Human Resources Laboratory<br />
Implications of carrel instruction for military<br />
testing<br />
Dr. Ronald W. Spangenberg<br />
Air Force Human Resources Laboratory<br />
Presented by: Dr. Roger J. Pennell<br />
Assessment of complex psychomotor coordination<br />
learning and performance during 30 days of<br />
intensive testing<br />
Dr. Randall M. Chambers<br />
Georgia Institute of Technology<br />
Dr. Rayford T. Saucer<br />
Veterans Administration Hospital<br />
Proficiency measurement in flight simulators<br />
Dr. Edwin Cohen<br />
The Singer Company, Simulation Products Division<br />
END OF TUESDAY SESSIONS<br />
Wednesday, October 31<br />
Session Chairman<br />
Mr. Andrew T. Garza<br />
Air Force Human Resources Laboratory<br />
Lt Comdr C. L. Walker<br />
Naval Guided Missiles School<br />
Data Design Laboratories<br />
0920 - 0935<br />
0935 - 0950<br />
0950 - 1005<br />
1005 - 1020<br />
1020 - 1035<br />
1035 - 1050<br />
1050 - 1120<br />
1120 - 1200<br />
1200 - 1330 Lunch<br />
Hemisfair Room<br />
Ground Floor<br />
Lt Col Quay C. Snyder<br />
United States Army War College<br />
Standardized ratings: Why not?<br />
Drs. W. H. Githens and R. S. Elster<br />
United States Naval Postgraduate School<br />
The Keesler Study - electronic technician first<br />
year evaluation of three types of training<br />
Dr. Virginia Zachert<br />
Medical College of Georgia<br />
Coffee Break<br />
Session Chairman<br />
Mr. Kenneth G. Koym<br />
Air Force Human Resources Laboratory<br />
Mr. William C. Osborn<br />
Human Resources Research Organization<br />
The Self Evaluation Technique (SET) studies<br />
Dr. John J. Holden<br />
United States Army Ordnance Center and School<br />
A modular approach to proficiency testing<br />
Dr. Robert W. Stephenson, Mr. Warren P. Davis,<br />
and Mr. Harry I. Hadley<br />
American Institutes for Research<br />
Mrs. Bertha H. Cory<br />
United States Army Research Institute for the<br />
Behavioral and Social Sciences<br />
The increasing demand for human performance testing<br />
Mr. J. E. Gerber, Jr.<br />
United States Army Infantry School<br />
Presented by: Capt John P. Otjen<br />
Session Chairman<br />
Lt James W. Abellera<br />
Air Force Human Resources Laboratory<br />
1330 - 1350<br />
1350 - 1420<br />
1420 - 1435<br />
1435 - 1450<br />
1450 - 1505<br />
1505 - 1520<br />
1520 - 1540<br />
1540 - 1600<br />
1600 - 1615<br />
Criterion-referenced performance testing<br />
in combat arms units<br />
Mr. John F. Hayes<br />
UHS/Matrix Company<br />
The Air Force wife: Her knowledge of, and<br />
attitudes toward, the Air Force<br />
Drs. John A. Belt and Arthur B. Sweney<br />
Wichita State University<br />
Cadet retention at the Royal Military College of<br />
Canada as a function of personality, leadership<br />
performance, and biographical variables<br />
Lt Col G. J. Carpenter<br />
Royal <strong>Military</strong> College of Canada<br />
Lt Knight O. Cheney<br />
United States Coast Guard Reserve<br />
Encounter groups as a means for effecting<br />
attitudinal change<br />
Dr. Peter F. Newton<br />
National Security Agency<br />
Coffee Break<br />
Session Chairman<br />
Mr. James M. Wilbourn<br />
Air Force Human Resources Laboratory<br />
An evaluation of diagnostic vocabulary testing<br />
in Air Force technical training<br />
Lt William P. Mockovak<br />
Air Force Human Resources Laboratory<br />
Uniformed physicians without patients: A<br />
squandering of scarce resources?<br />
Dr. Gary B. Brumback<br />
American Institutes for Research<br />
An experimental implementation of computer-assisted<br />
admissible probability testing<br />
Mr. W. L. Sibley<br />
RAND Corporation<br />
Monterrey Suite<br />
Room 909<br />
1730 - 1830<br />
Hemisfair Room<br />
Ground Floor<br />
0830<br />
0830 - 0900<br />
0900 - 0915<br />
0915 - 1005<br />
1005 - 1020<br />
1020 - 1045<br />
END OF WEDNESDAY SESSIONS<br />
MTA President's social hour<br />
Thursday, November 1<br />
Session Chairman<br />
Lt Barry P. McFarland<br />
Air Force Human Resources Laboratory<br />
Call to order<br />
CAROUSEL: A computer program which selects<br />
qualitative predictors for quantitative criterion<br />
prediction problems<br />
Dr. William J. Moonan<br />
Navy Personnel Research and Development Center<br />
Dr. Roger Pennell<br />
Air Force Human Resources Laboratory<br />
Applications of predictor ordering and selection<br />
by a Bayesian-decision technique<br />
Mr. S. E. Bowser<br />
Navy Personnel Research and Development Center<br />
Coffee Break<br />
Session Chairman<br />
Mr. William J. Phalen<br />
Air Force Human Resources Laboratory<br />
How admissible probability scoring affects and<br />
is affected by the larger system of incentives<br />
Dr. Thomas A. Brown<br />
RAND Corporation<br />
1045 - 1100<br />
1100 - 1130<br />
1130 - 1145<br />
1145 - 1200<br />
1200 - 1330<br />
Hemisfair Room<br />
Ground Floor<br />
1330 - 1355<br />
1355 - 1420<br />
1420 - 1440<br />
Dr. Thomas C. Tuttle<br />
Westinghouse Behavioral Safety Center<br />
Review of Air Force job satisfaction and career<br />
development research<br />
Mr. R. Bruce Gould<br />
Air Force Human Resources Laboratory<br />
A methodology for determining job satisfaction<br />
in the U. S. Navy<br />
Dr. Lawrence A. Goldman<br />
Naval Occupational Task Analysis Program<br />
Dr. William J. Figel<br />
Science Research Associates<br />
Lunch<br />
Session Chairman<br />
Mr. C. Amos Johnson<br />
Air Force Human Resources Laboratory<br />
Job description in the Armed Forces as a basis<br />
for analysis and evaluation<br />
1440 - 1500<br />
1500 - 1515<br />
1515 - 1540<br />
1540 - 1610<br />
1610 - 1625<br />
Dr. Robert G. Smith, Jr.<br />
Human Resources Research Organization<br />
1625 - 1630 Administrative Announcements<br />
Terrace Ballroom<br />
1900 - 2000<br />
Relative aptitude requirements research<br />
Dr. Raymond E. Christal and<br />
Sqn Ldr John W. K. Fugill, RAAF<br />
Air Force Human Resources Laboratory<br />
Coffee Break<br />
Session Chairman<br />
Mr. Wayne S. Archer<br />
Air Force Human Resources Laboratory<br />
Mr. Clifford P. Hahn<br />
American Institutes for Research<br />
Cognitive complexity: Its association with<br />
selection for military leadership roles<br />
Sqn Ldr Brian N. Purry<br />
Royal Air Force<br />
END OF THURSDAY SESSIONS<br />
Social Hour - "pay-as-you-go" bar<br />
2000 - 2200 Banquet - Invited Speaker<br />
Brig Gen Conrad S. Allman<br />
Commander, USAF Recruiting Service<br />
Friday, November 2<br />
Hemisfair Room<br />
Ground Floor<br />
0845<br />
Session Chairman<br />
Mr. William E. Alley<br />
Air Force Human Resources Laboratory<br />
Call to order<br />
0845 - 0900<br />
0900 - 0920<br />
0920 - 0940<br />
0940 - 1005<br />
_.~~<br />
1005 - 1020<br />
1020 - 1035<br />
1035 - 1100<br />
1100 - 1130<br />
1130 - 1150<br />
Administrative Announcements<br />
Administration of multiple-choice tests to<br />
non-readers via tape recorder: Case studies<br />
Dr. Joseph L. Boyd, Jr.<br />
Educational <strong>Testing</strong> Service<br />
Racial differences in AFQT, AQE, and WAPS scores<br />
of Air Force basic trainees<br />
Capt Stephen B. Knouse, Sgt David F. McGrevy,<br />
and Sgt Ronnie A. Thompson<br />
Air Force Human Resources Laboratory<br />
Does the USAF Officer Biographical Inventory<br />
portion of the AFOQT inadvertently measure<br />
certain personality traits?<br />
1150 - 1210 A proposed spatial orientation/disorientation<br />
flight training concept<br />
Mr. Patrick J. Dowd<br />
USAF School of Aerospace Medicine<br />
1210 - 1230 Concluding Comments<br />
END OF CONFERENCE<br />
I.<br />
II.<br />
III.<br />
IV.<br />
V.<br />
VI.<br />
VII.<br />
VIII.<br />
IX.<br />
X.<br />
TABLE OF CONTENTS<br />
Keynote Address . . . . . . . . . Lt Gen John W. Roberts<br />
Symposium synopsis: Cooperative<br />
aptitude research programs conducted<br />
through the Armed Forces<br />
Vocational Testing Group . . . . . Dr. L. D. Valentine<br />
Maj Verna S. Kellogg III<br />
Capt Melvin T. Gambrell, Jr.<br />
Dr. Harry D. Wilfong<br />
Development of the Armed Services<br />
Vocational Aptitude Battery . . . . . . . . . . Dr. M. A. Fischl<br />
New developments in defense<br />
language proficiency testing . . . . . . . Dr. C. D. Leatherman<br />
Dr. A. Al-Haik<br />
Enlisted selection and classification<br />
testing in the U. S. Army . . . . . . Dr. Milton H. Maier<br />
The development of an English Comprehension<br />
Level screening test for foreign students . . . Mr. Cortez Parks<br />
An experimental multimedia course<br />
development effort for new mental<br />
standards airmen . . . . . . . . . . . . . Mr. George P. Scharf<br />
Optimal utilization of on-the-job<br />
training and technical training<br />
school . . . . . . . . . . . . . . . . . . Capt Alan D. Dunham<br />
Symposium: Translation of training<br />
research into training action - a<br />
missing link . . . . . . . . . . . . . . . Dr. G. Douglas Mayo<br />
Mr. Walter E. McDowell<br />
Dr. Norman J. Kerr<br />
Lt Col Donald F. Mead<br />
Dr. Earl I. Jones<br />
Test feedback and training<br />
objectives . . . . . . . . . . . . . . . LtJG Carroll H. Greene<br />
Page<br />
1<br />
20<br />
27<br />
34<br />
45<br />
52<br />
60<br />
92<br />
96<br />
101<br />
108<br />
114<br />
118
XI.<br />
XII.<br />
XIII.<br />
XIV.<br />
XV.<br />
XVI.<br />
XVII.<br />
XVIII.<br />
XIX.<br />
XX.<br />
XXI.<br />
Page<br />
129<br />
141<br />
147<br />
154<br />
170<br />
178<br />
207<br />
234<br />
249
XXII. Series of field research studies<br />
using the Self Evaluation Technique<br />
(SET studies) . . . . . . . . . . . . Dr. John J. Holden<br />
XXIII. A modular approach to proficiency testing<br />
XXXIII. A factor analytic approach to the<br />
criterion problem . . . . . . . . . . . . . Dr. Roger Pennell 508<br />
XXXIV. Applications of predictor ordering<br />
and selection by a Bayesian-decision<br />
technique . . . . . . . . . . . . . . . . . Mr. S. E. Bowser 525<br />
XXXV. Growing demands on human<br />
resources: A useful taxonomy . . . . . Dr. Thomas C. Tuttle 543<br />
XXXVI. Review of Air Force job<br />
satisfaction and career development<br />
research . . . . . . . . . . . . . . . . . Mr. R. Bruce Gould 560<br />
XXXVII. A methodology for determining<br />
job satisfaction in the U. S.<br />
Navy . . . . . . . . . . . . . . . . . Dr. Lawrence A. Goldman 573<br />
XXXVIII. Interest surveys and job placement<br />
in the Armed Services . . . . . . . Dr. William J. Figel 585<br />
XXXIX. Job description in the<br />
Armed Forces as a basis<br />
for analysis and<br />
evaluation . . . . . . . . . . . Flottillenadmiral Guenter Fiebig 596<br />
XL. Possibilities and limitations<br />
of job analysis and job<br />
evaluation in Armed Forces . . . . . . Lt Col H. E. Seuberlich 608<br />
XLI. Occupational analysis in the<br />
Royal Australian Air Force . . . . . . . Capt Wayne S. Sellman<br />
Sqn Ldr John W. K. Fugill 622<br />
XLII. Collection of performance<br />
data at the job site level . . . . . . . Mr. Clifford P. Hahn 643<br />
XLIII. Cognitive complexity: Its<br />
association with selection for military<br />
leadership roles . . . . . . . . . . . Sqn Ldr Brian N. Purry 659<br />
XLIV. New concepts for the measurement<br />
of attitudes and motives . . . . . . Dr. Robert G. Smith, Jr. 674<br />
XLV.<br />
XLVI.<br />
XLVII.<br />
XLVIII.<br />
XLIX.<br />
L.<br />
LI.<br />
LII.<br />
LIII.<br />
LIV.<br />
LV.<br />
LVI.<br />
Page<br />
Administration of multiple-choice tests to<br />
non-readers via tape recorder: Case studies . . . Dr. Joseph L. Boyd, Jr. 689<br />
Racial differences in AFQT, AQE,<br />
and WAPS scores of Air Force<br />
basic trainees . . . . . . . . . . . . Capt Stephen B. Knouse<br />
Sgt David F. McGrevy<br />
Sgt Ronnie A. Thompson 694<br />
Does the USAF Officer Biographical<br />
Inventory portion of the AFOQT<br />
inadvertently measure certain<br />
personality traits? . . . . . . . . . . Lt Col Charles W. Haney<br />
Ms. A. M. Kelleher<br />
Melvin S. Majesty<br />
Wallace F. Veaudry<br />
…ment of a military setting . . . . . . . . . . Dr. Kay H. Smith<br />
A proposed spatial orientation/<br />
disorientation flight training<br />
concept . . . . . . . . . . . . . . . . . Mr. Patrick J. Dowd<br />
Minutes of the 1973 MTA Steering<br />
Committee Meeting . . . . . . . . . . . . . . . . . . . . . .<br />
Citation of the Capt Harry H. Greer<br />
Award for 1973 . . . . . . . . . . . . . . . . . . . . . . .<br />
By-Laws of the Military <strong>Testing</strong><br />
<strong>Association</strong> . . . . . . . . . . . . . . . . . . . . . . .<br />
List of conferees . . . . . . . . . . . . . . . . . . . . . . 801<br />
709<br />
726<br />
731<br />
740<br />
750<br />
757<br />
789<br />
793<br />
795
KEYNOTE ADDRESS<br />
Lt Gen John W. Roberts<br />
Deputy Chief of Staff, Personnel<br />
Headquarters, United States Air Force<br />
It's traditional for a speaker, particularly a keynote<br />
speaker, to open his remarks with the statement that he's<br />
pleased and gratified to be wherever he is. I can do this<br />
with a considerable measure of sincerity, since being here<br />
gives me an opportunity to pass on some thoughts to a group<br />
that holds the key to better use of human resources in these<br />
days of growing demands.<br />
In the last few days, I've supplemented my knowledge<br />
about this audience with some homework, including a review<br />
of the minutes of your past meetings. Your achievements, both<br />
as individuals and as an association, are certainly more than<br />
impressive.<br />
Although I do not speak to you as a psychologist or as<br />
a measurement specialist, I feel a definite kinship with many<br />
in this group. I have worked very closely with people in the<br />
Human Resources Laboratory for the past three years. As<br />
Director of Personnel Plans for the Air Force, and now as<br />
the Deputy Chief of Staff for Personnel, I know that many of<br />
the decisions we have made, and will make, are based directly<br />
on the results of your efforts. Your contributions have had<br />
a significant impact on the way we manage people: only the most<br />
naive would deny that we've been able to build a far better<br />
military establishment with you than we could have without you.<br />
For these reasons, then, I welcome the opportunity to<br />
pass on some of my thoughts and, hopefully, to offer some<br />
challenges for the future. I'll start with a couple of<br />
observations.<br />
In reading over past conference reports, I find that<br />
your guest speakers can be broadly categorized into two<br />
groups. In the first category are the personnel researchers,<br />
the experts, people like you. They understand the behavioral<br />
sciences. They understand the problems and the lead times<br />
involved in achieving effective progress. As a result, they<br />
usually tell an audience like this to ride out the changes<br />
in administrative personnel, to avoid a preoccupation with<br />
the operational demands of today's job, to take the long<br />
range view in the interest of excellence and real progress.<br />
In the second category, I find people like myself,<br />
managers, or laymen (in the research vernacular) trying to<br />
cope with the day-to-day problems of procuring and managing<br />
large numbers of people. While we realize the futility of<br />
trying to come up with airtight solutions, we tend to ask<br />
you for formulas and tools (what you might call gimmicks)<br />
that will enable us to tie up our problems in nice, neat<br />
packages. And, of course, we're always in a hurry. We<br />
become a little impatient when your answers are not quick<br />
enough or you try to qualify, perhaps justifiably, the answers<br />
you do provide.<br />
From experience, I know that the difference in<br />
perspective represented in these two points of view can<br />
put the manager and his experts at odds. I would go even<br />
further. They can, and often do, make effective communication<br />
difficult, if not impossible. The tragic thing is that we have<br />
never needed effective communication between the researcher<br />
and the manager more than we do today. The demands on our<br />
ingenuity to employ, to challenge, to get maximum value from<br />
those who elect to join us have never been greater. There is<br />
too much to know<br />
effort, or some other factor, the fact remains that today's<br />
environment has some profound implications for the armed<br />
forces, especially when it comes to manpower.<br />
We can talk about this situation in terms of tight<br />
money. We can point to force reductions, or shortfalls in<br />
military strength. What it all means, however, is that the<br />
military, as an institution, must face up to the challenge<br />
of having to manage its human assets better -- to get as much<br />
dedication, as much effectiveness, out of our people as<br />
possible. In spite of the present environment, we must<br />
continue to find legitimate ways to attract the manpower<br />
we need to keep our forces viable.<br />
It's been estimated that maintaining an all-volunteer<br />
force of 2.2 million over the long haul will require one<br />
out of every three qualified and available men in this<br />
country to volunteer for active military service. Under<br />
present policies, one-third will have to be "above average,"<br />
over one-half, "average." This year alone the armed services<br />
will have to attract better than 400,000 young Americans to<br />
meet our commitments. But the problem goes beyond numbers.<br />
They also have to be the right kind of people. Here is<br />
where we've got some room for improvement.<br />
For example, out of the average 100 enlistees who enter<br />
the Air Force, 25 leave before they complete 4 years. Some<br />
are found unsuitable or unfit for military service; some<br />
have disciplinary problems; others leave for personal reasons.<br />
In addition to the personal anguish involved, these people<br />
represent time, effort, and money, all of which are in short<br />
supply these days. We know that high school graduates have<br />
fewer disciplinary problems than non-graduates, by a ratio<br />
of one to four. (Yet we know that the majority of<br />
non-graduates can do well.)<br />
The question is why? What are the factors working in<br />
this area? Can we, can you, identify them? Is it something<br />
we're doing? Should we try to deal with these people before<br />
they come on board or after, or both? These are tough<br />
questions; but they need to be addressed, and I look to<br />
you for the answers.<br />
After we get the right kind of people, we also have to<br />
manage them properly. I sincerely believe young people today<br />
are as eager as ever to do a good job. I also know they want<br />
to ask questions. They want to know why as well as what; and,<br />
above all, they want to be challenged and provided the<br />
opportunity to participate in decisions that affect their<br />
lives.<br />
This means we have to avoid arbitrarily poking those<br />
people into holes. We have to realize and act on the<br />
premise that what may excite and challenge one man or<br />
woman may not challenge another. In the long run,<br />
Arbitrary, "convenient" actions can create a situation<br />
in which an individual who might have been a tremendous<br />
success ends up being bored and indifferent to his work<br />
or elects to leave us.<br />
Our need here is obvious. We must have the best tools<br />
we can get to classify and assign people. We must have the<br />
best predictors of success in training and on the job that<br />
you can give us. We must be as sure as we are able that<br />
the system is doing everything it can to provide each person<br />
who comes to us the best opportunity possible to grow, and<br />
learn, and progress to the limits of his or her capabilities.<br />
I would like to underline the fact that this philosophy<br />
applies to the minority as well as the majority members among<br />
us.<br />
Like every other service, the Air Force is trying hard<br />
to satisfy the expectations of its minority members. We have<br />
almost 250 enlisted career fields in the Air Force. A little<br />
better than a year ago, 91 of them had less than a 5 percent<br />
minority population. Today that number is down to about 45,<br />
and we are heading for 0. Like the other services, we are<br />
working the problem and the effort is starting to pay dividends.<br />
We still can use help, all the help you can give us.<br />
What I have been discussing so far is just a small<br />
fraction of the challenge we face. Even this incomplete<br />
list, however, indicates the need today for aggressive,<br />
decisive action. This is the time for the military to<br />
question, to search, to find new and better ways to manage<br />
its human resources, to take the lead in making its own<br />
unique contributions to the growth and development of<br />
military people as vital members of society.<br />
We must ask ourselves, What are the implications for<br />
testing? What role should tests play in this coming decade?<br />
Of all the many specific uses to which tests are put, is there<br />
any one direction or unifying theme? I believe there is. I<br />
think there is a clear direction in which testing should<br />
develop, and I think there is an underlying theme for that<br />
direction.<br />
It seems to me that the requirement to<br />
Which means what, specifically?<br />
First, I think it means that tests of the future must<br />
focus much more on the individual, on his unique talents<br />
and desires.<br />
Second, it means fitting tests into a scheme that<br />
maximizes the opportunity for choice, both by the individual<br />
and by his employer.<br />
Third, as a function of the first two, it means that tests<br />
must provide broader profiles of information. Four or five<br />
aptitude scores do not provide a sufficient basis for the<br />
kind of cozqlex, comprehensive career programs we have today<br />
and will need in the future.<br />
The role cf tests, now and in the future, should be<br />
to supply a vital piece 0f the information we need in properly<br />
managing the careers of our people. The quality and breadth<br />
of this information will be crucial to the quality of the<br />
decisions he make, but tests must not serve to prejudge the<br />
individual for a full career. They should clearly be a tool<br />
in the decision process. By that I mean that they should show<br />
the implications of various alternative decisions.<br />
For example, tests are needed, especially with the advent<br />
of the new instruction systems, that can recommend specific<br />
kinds of learning experiences that can be applied to an<br />
individual case. Lee Brokaw, your program chairman, put it<br />
well recently when he said, 'We must shift emphasis from finding<br />
the best indfvidual for the job to find the best job for the<br />
individual."<br />
New systems are also needed to provide a more<br />
balanced role for tests, roles that clarify where tests<br />
fit in career development, roles that make sure the<br />
limitations of testing are equally clear.<br />
We also have to remember that testing policies and<br />
practices must have a relationship to this nation's goals<br />
in the area of social policy. The Civil Rights Act of 1964,<br />
the Supreme Court decision in the Duke Power Company case,<br />
and the Equal Employment Opportunity Act of 1972 all really<br />
make one point. Equal opportunity means giving all Americans<br />
a chance, and this chance must be more than just a chance to<br />
fail.<br />
As Lyndon Johnson said in the last address he made<br />
before his death, "History has not been equal for all<br />
Americans." We recognize now that tests can be used<br />
negatively. They can perpetuate a cycle of assignment to<br />
jobs with limited growth potential. They can confirm the<br />
"inequality of history," in Lyndon Johnson's terms, by<br />
making past deficiencies the basis for continued unfairness.<br />
In short, they can become sentences instead of doors to wider<br />
growth and development.<br />
Ladies and gentlemen, if we are to have viable armed<br />
forces in the future, we must come up with reasonable answers<br />
to the questions and problems confronting us in personnel<br />
management. You know our requirements; you know how urgent<br />
they are.<br />
Today, as your keynote speaker, I ask you to provide us<br />
the tools we need to satisfy these requirements. When you<br />
bring us the tools, tell us how to use them and how not to<br />
use them, and make sure we listen. As a manager with a<br />
primary interest in people, I can tell you we do need help,<br />
the kind of help only you can provide. I ask that you come<br />
forward and tell us what should be done.<br />
For the short term, give us the best answers you can.<br />
Remember, we can't always wait for the research effort to<br />
be neatly organized, the right researcher to be found, the<br />
research effort to be conducted in a Simon-pure environment,<br />
etc. Very often we need answers now. I challenge you to<br />
respond quickly when the situation demands.<br />
For the long term, identify the areas where we should<br />
encourage further research: research that is relevant;<br />
research designed to tackle the tough problems of tomorrow,<br />
not yesterday's problems; research that will enable us to<br />
achieve progress that is real and enduring.<br />
And finally, let the experts and the managers (you<br />
and I) join forces, with the knowledge that there is<br />
immediacy and urgency in the work we do, that there is<br />
excitement and opportunity in improving the management<br />
of the most precious asset entrusted to any organization.<br />
Now may not be the best of times, but it is our time.<br />
Let's use it wisely.<br />
Synopsis<br />
MTA Symposium - Cooperative Aptitude Research<br />
Programs Conducted through the Armed<br />
Forces Vocational <strong>Testing</strong> Group<br />
Participants:<br />
Dr. Harry D. Wilfong, AFVTG<br />
Maj Verna S. Kellogg II, AFVTG<br />
Capt Kevin T. Gambrell, Jr., AFVTG<br />
Dr. Lonnie D. Valentine, AFHRL - Chairman<br />
Introductory Comments (Dr. Valentine)<br />
1. The Armed Services High School Testing Program has gradually<br />
evolved since 1966 into a program of sizable importance.<br />
2. In today's all-volunteer era, it will probably assume greater<br />
importance to the services as a constructive point of contact between<br />
the services and the educational community with its pool of young people<br />
entering military age.<br />
3. For about the past 6 months, the High School <strong>Testing</strong> Program<br />
has been under management of the Armed Forces Vocational <strong>Testing</strong> Group<br />
at Randolph Air Force Base. Other participants in today's symposium<br />
are from that organization.<br />
4. After the participants have each briefly discussed some<br />
aspect of the program, the floor will be open for audience questions<br />
and suggestions. It is hoped that the discussion period will serve<br />
to elicit constructive suggestions for the program.<br />
Program History (Major Kellogg)<br />
1. A popular stereotype of the recruiter is the "lifer" bodysnatching<br />
young men from the ghetto and the juvenile courts and<br />
channeling them to the rehabilitation environment of the military<br />
services. Nothing could be further from the truth. The services<br />
need "quality" human resources--military services today demand<br />
trainable, educable and alert individuals.<br />
2. The sophistication of today's weapon systems requires the<br />
military services to expend more time and money training a man to<br />
an acceptable skill level than they can anticipate realizing from<br />
his services. Thousands of young people leave the military service<br />
after their first enlistment. Return on the taxpayers' investment in
them is realized mainly in terms of the skills they take back into<br />
civilian jobs.<br />
3. Because of this requirement for quality manning, the<br />
military services have tied their recruiting and enlistment programs<br />
directly to the measurement of aptitude for vocational training.<br />
4. The Department of Defense, this year, directed the formation<br />
of the Armed Forces Vocational <strong>Testing</strong> Group--to direct, manage<br />
and guide the Department of Defense High School <strong>Testing</strong> Program.<br />
This program is the cornerstone of recruiting the all-volunteer<br />
force.<br />
5. The primary tool of the testing program is the Armed<br />
Services Vocational Aptitude Battery, commonly referred to as<br />
ASVAB.<br />
6. ASVAB is the product of over 30 years research and<br />
development in the area of military classification--most of it<br />
accomplished by people in this room. Today's enlistee is carefully<br />
tested for identification of his potential for training.<br />
7. The genesis of the High School <strong>Testing</strong> Program goes back<br />
to 1958, when the Air Force recruiters struck on the idea of going<br />
into high schools with a Vocational Aptitude Battery, enabling<br />
them to identify specific youngsters with specific aptitudes as<br />
potential recruits. With the success of the Air Force program, the<br />
other services initiated similar programs. The competition caused<br />
confusion and projected an unprofessional image of the military<br />
services.<br />
8. In 1966, the Department of Defense directed that the services<br />
combine their high school efforts.<br />
9. A group of service behavioral scientists met and the<br />
eventual fruit of their efforts was ASVAB--ASVAB combined the best<br />
and most desired features of all the military classification tests<br />
and gave the High School <strong>Testing</strong> Program a product usable by all<br />
the services.<br />
10. The concept was good--but different parts of the action<br />
were assigned to the various services--the Army was tasked to<br />
develop the test, the Navy was tasked to provide counselor's<br />
materials and the Air Force supervised the scoring and return of<br />
the test to the counselor. Recruiters went back into the schools,<br />
all using the same test, but still with their parochial interests<br />
at heart. Thus, ASVAB became an Army test, a Navy test or an<br />
Air Force test depending upon the representative who got there first.<br />
Both recruiters and counselors lost sight of the value of the<br />
battery in helping young people plan their futures--and got<br />
hung up on the idea that ASVAB was only a means to get names<br />
of potential enlistees.<br />
11. The Armed Forces Vocational <strong>Testing</strong> Group is the answer<br />
to the problem. Staffed by personnel of all the military<br />
services--Army, Navy, Marines and Air Force--the group provides<br />
a single manager and interpreter of ASVAB for the Department<br />
of Defense High School <strong>Testing</strong> Program. Dealing with the local<br />
recruiter through an interservice recruitment committee (organized<br />
by geographical location and representing each service) the group<br />
assures equitable and professional administration of ASVAB.<br />
12. The mission of the Armed Forces Vocational <strong>Testing</strong> Group<br />
is to place ASVAB in the schools, assure its currency and validity<br />
and provide the student and counselor with assistance and<br />
information. Our job ends there;
tests for the joint battery from a pool of test items composed of<br />
those in the interchangeable components of the existing service<br />
batteries and (c) standardization of the resultant battery.<br />
4. To establish test interchangeability, all of the service<br />
classification batteries were administered to a common sample of<br />
3900 basic trainees from the Army, Navy, Air Force and Marine<br />
Corps. <strong>Testing</strong> was arranged for three separate days,<br />
with only one of the service batteries being administered on a given<br />
day, and battery administration was counterbalanced to eliminate<br />
effects of test practice.<br />
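Counterbalancing of this kind can be sketched as a cyclic (Latin-square) rotation of administration order, so that each battery appears equally often in each day position. The battery labels and subgroup assignment below are illustrative, not the study's actual schedule:<br />

```python
def latin_square_orders(batteries):
    """Cyclic rotation: each battery appears exactly once in each day position."""
    n = len(batteries)
    return [[batteries[(start + day) % n] for day in range(n)]
            for start in range(n)]

batteries = ["Army", "Navy", "Air Force"]  # illustrative labels
orders = latin_square_orders(batteries)
# Assign each trainee subgroup one of the day-orders so that, across
# subgroups, practice effects average out over batteries.
for group, order in enumerate(orders, start=1):
    print(f"Subgroup {group}: " + ", ".join(
        f"day {d + 1}={b}" for d, b in enumerate(order)))
```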
5. A stratified subsample, representative of the full range<br />
of ability on the Armed Forces Qualification Test, was used to<br />
establish intercorrelations among all of the tests in the various<br />
service classification batteries; this resulted in the identification<br />
of seven interchangeable content areas. Two other areas were<br />
added to ASVAB to yield an AFQT equivalent.<br />
6. The nine ASVAB tests yield five aptitude composites which<br />
are reported to high school counselors through standard format.<br />
These are:<br />
(A) Electronics - describes students in terms of abilities<br />
relevant to electrical/electronic occupations. The aptitude score<br />
consists of tests dealing with electrical information and understanding<br />
mechanical principles.<br />
(B) General Maintenance - describes capabilities relevant<br />
to a variety of mechanical and trade jobs. The composite consists<br />
of tests assessing shop information and spatial ability.<br />
(C) Motor Maintenance - is concerned with ability for<br />
engine repair and related jobs and measures automotive information and<br />
understanding of mechanical principles.<br />
(D) Clerical - measures ability relative to clerical/<br />
administrative occupations. Composite concerned with verbal ability<br />
and clerical speed and accuracy.<br />
(E) General <strong>Technical</strong> - describes students' ability for<br />
occupations requiring academic ability. This aptitude is taken from<br />
verbal and mathematical components of battery.<br />
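As a rough illustration of how such composites work, the sketch below sums standardized subtest scores into unit-weight composites. The subtest-to-composite mapping is paraphrased from the descriptions above; the actual operational composition and weights are not specified in the text, so treat both as assumptions:<br />

```python
# Hypothetical unit-weight composites built from standardized subtest scores;
# the mapping paraphrases the composite descriptions, not official scoring.
COMPOSITES = {
    "Electronics":         ["Electronics Information", "Mechanical Comprehension"],
    "General Maintenance": ["Shop Information", "Space Perception"],
    "Motor Maintenance":   ["Automotive Information", "Mechanical Comprehension"],
    "Clerical":            ["Word Knowledge", "Coding Speed"],
    "General Technical":   ["Word Knowledge", "Arithmetic Reasoning"],
}

def composite_scores(standard_scores):
    """Sum standardized subtest scores into the five aptitude composites."""
    return {name: sum(standard_scores[t] for t in tests)
            for name, tests in COMPOSITES.items()}

scores = {"Electronics Information": 55, "Mechanical Comprehension": 60,
          "Shop Information": 48, "Space Perception": 52,
          "Automotive Information": 50, "Word Knowledge": 62,
          "Coding Speed": 45, "Arithmetic Reasoning": 58}
print(composite_scores(scores))
```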
7. The battery contains 300 test items; 100 in the Coding<br />
Speed and 25 in each of the remaining eight tests. Actual testing<br />
time is 112 minutes; generally two and one-half hours are required<br />
to administer the battery. Speed is not emphasized, although all<br />
tests are timed.<br />
8. Test items contained in Form 1 of the ASVAB (with the exception of<br />
those in Coding Speed) were selected from an item pool consisting of<br />
all items contained in the service classification batteries used in<br />
the "interchangeability" study. Criteria for item selection were<br />
difficulty level (proportion of examinees responding correctly),<br />
discrimination level (ability of the item to discriminate correctly<br />
between individuals who score high and those who score low on the<br />
relevant ability as reflected in the item's correlation with other<br />
items of its type), and content validity.<br />
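These two item statistics have standard formulations: difficulty as the proportion of correct responses, and discrimination as the item's correlation with the score on the remaining items. A minimal sketch on made-up 0/1 data (the study's exact computations are not given in the text):<br />

```python
from statistics import mean, pstdev

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (1 = correct)."""
    return mean(responses)

def item_discrimination(item, rest_scores):
    """Pearson correlation of the 0/1 item with the score on the other items."""
    mi, mr = mean(item), mean(rest_scores)
    cov = mean([(x - mi) * (y - mr) for x, y in zip(item, rest_scores)])
    return cov / (pstdev(item) * pstdev(rest_scores))

# Illustrative response matrix: rows = examinees, columns = items.
data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]]
item0 = [row[0] for row in data]
rest = [row[1] + row[2] for row in data]
print(round(item_difficulty(item0), 3))            # → 0.667
print(round(item_discrimination(item0, rest), 3))  # → 0.433
```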
9. The 25 items in each subtest (except Coding Speed) were<br />
normed on a representative sampling of 36 thousand cases, producing separate norms<br />
by geographical region, sex and high school grade. It has been my<br />
experience, thus far, that these are the three most common comparative<br />
variables requested by high school counselors.<br />
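Subgroup norms of this kind amount to percentile ranks computed separately within each norming sample. A minimal sketch, with illustrative subgroup keys and scores (not the program's actual norming data):<br />

```python
from bisect import bisect_right

def build_norms(raw_scores):
    """Return a function mapping a raw score to its percentile rank
    (percent of the norming sample scoring at or below it)."""
    ordered = sorted(raw_scores)
    n = len(ordered)
    return lambda score: 100.0 * bisect_right(ordered, score) / n

# Illustrative norming samples keyed by (region, sex, grade).
samples = {
    ("Northeast", "M", 11): [12, 15, 18, 20, 22, 25, 30],
    ("Northeast", "F", 11): [14, 16, 19, 21, 24, 26, 31],
}
norms = {group: build_norms(scores) for group, scores in samples.items()}
print(round(norms[("Northeast", "M", 11)](20), 1))  # → 57.1
```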
4. The other ASVAB research requirement is to expand the<br />
listing of civilian/military occupations as cataloged in Volume 2<br />
of the counselor's manual. What is being planned--assuming sufficient<br />
lead-time to permit publication for the 1974-75 school year--is a<br />
two-columnar hierarchical listing of representative civilian and<br />
service jobs anchored to appropriate Dictionary of Occupational Titles<br />
numbers.<br />
5. Long-range research needs identified in support of the ASVAB<br />
program are too numerous to cite during this brief presentation.<br />
However, I would like to touch upon some of the more significant efforts<br />
that are planned.<br />
6. First, content revisions are being considered, including<br />
deletion of the tool knowledge subtest, possible shortening of<br />
some of the other portions of ASVAB and the insertion of an<br />
occupational interest schedule. These changes should appear in<br />
Forms 5 and 6 of the ASVAB, hopefully to be available by the<br />
1975-76 school year.<br />
7. It is the general consensus of researchers exposed to the<br />
ASVAB that a highly desirable research undertaking would be development<br />
of current mobilization base data--representative of modern America<br />
on as many strata as possible. What is envisioned is a study broad<br />
enough to serve many masters and the needs of many agencies while<br />
simultaneously substituting for the old 1942-44 mobilization data<br />
base. The many problems involved in planning, securing approval<br />
for, and executing such an ambitious undertaking are well recognized.<br />
8. Another, equally critical, avenue for research exploration<br />
is to establish a mechanism for conduct of a series of longitudinal<br />
validation studies tracking students at successive stages along their<br />
career path, both within and outside of the military service. By<br />
its very nature, the ASVAB program presents the ideal vehicle for<br />
performing such research.<br />
9. A wide scenario of research support requirements has been<br />
identified for the ASVAB program, including subjects as listed in<br />
Attachment 1, Armed Services Vocational Aptitude <strong>Testing</strong> Research<br />
and Development Support requirements, of the joint services<br />
regulation covering the Armed Forces Vocational <strong>Testing</strong> Program.<br />
10. Research needs identified by high school counseling personnel<br />
and by service recruiting, production and classification representatives<br />
range across a broad spectrum, covering such topics as: tracing the<br />
longitudinal career decision process; exploring the feasibility of<br />
remote test scoring and concurrent feedback; establishing a task/<br />
aptitude criterion matrix, permitting, in turn, rapid estimation of<br />
predictive validities for new jobs entering the inventory, reconfigured<br />
jobs, or jobs having an insufficient N for traditional validation<br />
purposes; and a linear programmed approach to test administration.<br />
11. Rather than describe the entire spectrum of our anticipated<br />
research objectives at this time, I would prefer to return the floor<br />
to the workshop chairman so that we can explore subjects in which<br />
you may have a particular interest.<br />
Concluding Comments (Dr. Valentine)<br />
1. My few comments represent my personal views--not necessarily<br />
policy of AFHRL or the Air Force.<br />
2. AFHRL's role, as well as that of the other service R&D labs,<br />
in the program is viewed as one requiring sound R&D to support and<br />
improve the program. Policy with regard to the program is the<br />
responsibility of service and DOD headquarters agencies.<br />
3. There are three areas in which substantive support R&D<br />
is needed. These deal with (a) breadth of battery content and<br />
coverage, (b) battery standards (which include both norming and<br />
composite structure), and (c) validity studies.<br />
4. In its present form, ASVAB is limited to subtests either<br />
"common" to earlier service classification batteries or necessary to<br />
provide an AFQT score. Most of these predecessor batteries were<br />
designed to provide optimal prediction of training success. Such<br />
composites tend to intercorrelate substantially. School counselors<br />
have always been constrained to counsel all students, and the optimal<br />
test system for them both identifies ability level and provides optimally<br />
different classification indexes. The services have been able<br />
to select rather than to simply classify in the true sense.<br />
Under the volunteer force the services are going to be compelled<br />
to join counselors in a more effective job of differential<br />
classification. In addition, the ASVAB's present limited content<br />
coverage short-changes some of the classification needs of some<br />
of the services. Thus, the service labs need to give careful<br />
attention to appropriate added content and to the classification<br />
models which will allow decisions about both level of aptitude<br />
and the differential aptitude demands of jobs.<br />
5. The mobilization population standards against which the<br />
services typically calibrate new tests are now 30 years old, and they<br />
do not take into account special subgroups of particular interest<br />
for special programs. A broad-based national normative census,<br />
with provision for identification of standards appropriate to<br />
particular population subgroups, is needed. Some counselors have<br />
expressed dissatisfaction with the services' 1944 mobilization base<br />
standard and have maintained that they want standards based on high<br />
school age youth.<br />
6. Substantive ASVAB validation studies, not only against<br />
service criteria but against civilian career criteria as well, are<br />
needed.<br />
7. Careful attention to these three areas, with cooperative<br />
planning and research by the various service labs, can lead to development<br />
of a comprehensive and flexible test battery capable of satisfying<br />
both service and counselor needs, and can be an invaluable recruiting aid<br />
to all the services. Its accomplishment will, however, require that<br />
the services lay aside parochial interests, support the appropriate<br />
R&D labs with the manpower and other resources needed for its<br />
accomplishment, and encourage long range cooperative planning and<br />
research.<br />
(The symposium was concluded with an audience question and<br />
reaction period.)<br />
DEVELOPMENT OF THE ARMED SERVICES<br />
VOCATIONAL APTITUDE BATTERY<br />
W. A. Fischl, Ph.D.<br />
U. S. Army Research Institute for the Behavioral and Social Sciences<br />
The Armed Services Vocational Aptitude Battery (ASVAB) is the test<br />
battery administered in the joint services high school testing program.<br />
In this program the services offer to administer a battery of vocational<br />
aptitude tests in any high school that requests it. Trained test<br />
administrators visit the school and administer the battery, the tests are<br />
scored centrally, and results are returned to the high school guidance<br />
counselor for transmittal to the student. This program is provided to high<br />
schools that request it, at no charge either to the school or to the student.<br />
Identification of Content<br />
The tests employed in this program were developed to be a set of<br />
short alternate forms of the common elements of classification test batteries<br />
in use by the services. At the time of the original development each of the<br />
services was administering a classification battery of nine or ten aptitude<br />
tests. To identify the common content material of these batteries, the<br />
batteries of the Army, Navy, and Air Force were administered to a national<br />
sample of 3900 enlisted men during their first few days of service. Each<br />
enlisted man took the tests of all three batteries, in a counterbalanced<br />
sequence, one battery a day, over a several day period. The sample was<br />
subjected to statistical adjustment to equate for differences in screening<br />
methods and acceptance standards among the services, and the test scores<br />
were intercorrelated to determine which test content areas were sufficiently<br />
similar as to comprise the core of a joint test battery. The criterion of similarity<br />
was set as a correlation coefficient of 0.90, 1/ and this analysis<br />
identified the seven test content areas of verbal fluency, quantitative<br />
reasoning ability, spatial visualization, mechanical comprehension, shop<br />
information, automotive information, and electronics information. An<br />
eighth content area, clerical ability, was added because of its importance<br />
for counseling and classification, even though the service tests of<br />
this function did not meet the statistical criterion. Finally, knowledge<br />
about hand tools was added as a content area so that all four areas<br />
assessed in the Armed Forces Qualification Test (AFQT) would be represented.<br />
Construction of the Original Tests<br />
For all but the clerical content area, 25 items, spanning a full<br />
range of difficulty, were selected from the operational tests of the<br />
services. These items were assembled so as to comprise new forms of the<br />
service tests in the respective content areas. The clerical ability test<br />
was assembled by taking intact the most valid of the service tests in<br />
this area. It is a 100-item speed test; all of the others are power tests<br />
and are, with the exception of Tool Knowledge, half-length forms of the<br />
service tests in the counterpart areas. The Tool Knowledge test contains<br />
the same number of items as was contained in the Tool Knowledge portion<br />
of the then operational AFQT.<br />
Exhibit 1 describes the contents of the nine tests of the battery.<br />
To obtain norms, these tests were administered to a national sample<br />
Insert Exhibit 1 about here<br />
1/ Corrected for restriction of range and lack of perfect reliability.
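The footnoted correction for imperfect reliability is the standard disattenuation formula, r_corrected = r_xy / sqrt(r_xx * r_yy). A sketch with illustrative inputs (the 0.90 criterion value is from the text; the observed correlation and reliabilities below are assumed for the example):<br />

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for unreliability in both measures."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Illustrative: an observed correlation of .75 between two subtests with
# reliabilities of .85 and .80 corrects to roughly .91, which would meet
# a 0.90 similarity criterion.
print(round(disattenuate(0.75, 0.85, 0.80), 2))  # → 0.91
```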
EXHIBIT 1<br />
TESTS IN THE<br />
ARMED SERVICES VOCATIONAL APTITUDE BATTERY<br />
1. CODING SPEED TEST (CS). IN THIS TEST THERE IS A KEY AND 100 ITEMS.<br />
THE KEY IS A GROUP OF WORDS WITH A CODE NUMBER FOR EACH ...
of 3050 Selective Service registrants, stratified to represent the<br />
population of young men of military age.<br />
Construction of Successor Forms<br />
Coincident with introduction of the original form of the battery in<br />
the school year of 1968-69, work began to develop two entirely new forms.<br />
For each of the eight power tests, 1/ six times the required number of items<br />
were prepared. This is customary procedure, preparing a substantial overage<br />
in anticipation of attrition in the item analysis stage. These items were<br />
administered to several national samples stratified to represent the<br />
population of military age young men. The total N was 4000 cases, 18% of<br />
whom were blacks, and item analysis statistics of difficulty and<br />
homogeneity were obtained. Using these statistics, items were assembled<br />
into two parallel 25-item forms of each of the eight power tests. Each test<br />
was constructed to be of wide-ranging difficulty, with items spanning<br />
virtually the entire range of 99 per cent difficulty (very easy) to 1<br />
per cent difficulty (very hard) in the reference population. When coupled<br />
with the new forms of the speed test, there were two entirely new nine-<br />
test batteries.<br />
These batteries were administered to several stratified national<br />
samples of Selective Service registrants, one form to a sample. The<br />
total N was 3500 cases, virtually all of whom were between 18 and 19 years<br />
of age, 80% of whom had completed between 10 and 13 years of education,<br />
and 15.5% of whom were black. From this administration percentile and<br />
Army Standard Score norms were developed, and test reliability coefficients,<br />
intercorrelations, and other characteristics were derived.<br />
1/ The speed test was prepared in its final forms, because item analysis<br />
of a speed test is not meaningful, and item attrition is not incurred.<br />
Characteristics of the Battery<br />
The entire battery requires approximately 2 1/2 hours to administer.<br />
The longest test, Arithmetic Reasoning, takes about 25 minutes; the<br />
shortest, Coding Speed (clerical aptitude), takes 7 minutes; most of the<br />
others are 10-minute tests.<br />
Reliability coefficients are presented in Table 1, and test intercorrelations<br />
appear in Table 2. The intercorrelations are of intermediate<br />
Table 1<br />
ASVAB Reliability: Coefficients of Equivalence and Internal Consistency<br />
for Each Test<br />
Test                        Equivalence 1/    Internal Consistency 2/<br />
Coding Speed                .83, .86          Not Applicable<br />
Word Knowledge              .80, .85          .87<br />
Arithmetic Reasoning        .81, .86          .87<br />
Tool Knowledge              .79, .76          .78<br />
Space Perception            .82, .84          .84<br />
Mechanical Comprehension    .77, .73          .83<br />
Shop Information            .67, .65          .81<br />
Automotive Information      .75, .74          .85<br />
Electronics Information     .80, .79          .83<br />
1. Correlation of Form 2 with Form 3, in two subsamples, Form 2<br />
administered first in one (N = ...), Form 3 administered first in the<br />
other (N = 514).<br />
2. Utilizing Kuder-Richardson formula 20, N = 616.<br />
(insert Table 2 here)
magnitude, with the perceptual-motor test (Coding Speed) being generally<br />
most independent of the rest of the battery.<br />
The reliability coefficients seem reasonably high for tests of only<br />
25-item length, and predictive accuracy is enhanced operationally by<br />
utilizing combinations of several tests. Why the Shop Information test<br />
yielded the lowest equivalence coefficients in the battery is not apparent,<br />
especially since its internal consistency coefficient is of comparable<br />
magnitude to the rest of the set.<br />
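The internal consistency coefficients in Table 1 use Kuder-Richardson formula 20, KR-20 = (k/(k-1)) * (1 - sum(p_i * q_i) / var(total)). A sketch on an illustrative 0/1 response matrix (invented data, not the battery's):<br />

```python
def kr20(matrix):
    """Kuder-Richardson formula 20 for a 0/1 item-response matrix
    (rows = examinees, columns = items)."""
    k = len(matrix[0])
    n = len(matrix)
    # Sum of item variances p * (1 - p).
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n
        pq += p * (1 - p)
    # Variance of total scores (population form).
    totals = [sum(row) for row in matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / var_t)

data = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0],
        [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]]
print(round(kr20(data), 3))  # → 0.833
```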
Occupational predictions from ASVAB are generally made utilizing<br />
combinations of several tests. These combinations, both with ASVAB tests<br />
and their counterparts in the services' operational batteries, have under-<br />
gone extensive validation study against criteria of training success in<br />
large numbers of job specialties. The modal validity coefficient against<br />
these criteria has been of the order 0.60, with a range of 0.35 to 0.75.<br />
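A validity coefficient of this kind is simply the correlation between a test combination and a criterion. The sketch below computes the validity of a unit-weight two-test composite; the scores are invented for illustration and do not reproduce the reported coefficients:<br />

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Population Pearson correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return cov / (pstdev(x) * pstdev(y))

# Illustrative: validity of a unit-weight composite of two subtests
# against a training-success criterion, on made-up data.
test_a = [45, 50, 55, 60, 65, 70]
test_b = [40, 52, 48, 63, 61, 72]
criterion = [58, 60, 62, 75, 70, 80]
composite = [a + b for a, b in zip(test_a, test_b)]
print(round(pearson(composite, criterion), 2))
```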
In summary, the ASVAB is a nine-test aptitude battery developed to be<br />
shortened alternate forms of the common elements of the then service<br />
classification batteries. It has been empirically developed, and tried out<br />
on large representative samples of the population to which it is intended<br />
to be applied. The original form was in use in the joint service high school<br />
program for five years, Form 2 has been operational in the schools since<br />
the beginning of this school year, and testing goals are in excess of a<br />
million students by the end of the year.<br />
"NEW DEVELOPMENTS IN DEFENSE LANGUAGE PROFICIENCY TESTING"<br />
PRESENTED BY DR. C. D. LEATHERMAN AND DR. A. AL-HAIK OF THE<br />
DEFENSE LANGUAGE INSTITUTE<br />
1973 MILITARY TESTING ASSOCIATION CONFERENCE, SAN ANTONIO, TEXAS<br />
28 OCT - 2 NOV 1973<br />
Defense Language Proficiency Tests, commonly referred to as DLPTs, are tests<br />
designed to measure the language proficiency of all DLI graduates, as well<br />
as DOD-connected individuals claiming proficiency in a particular foreign<br />
language. Specifically, the DLPT serves two major purposes:<br />
a. To measure the listening comprehension and reading comprehension<br />
skills in a foreign language. These two skills range from minimum ability,<br />
which may be useful in an activity such as directing traffic, to the skill<br />
needed by a competent interpreter. The test does not discriminate among<br />
the higher language skills required by a top-level simultaneous translator.<br />
b. To evaluate and rate the examinee's ability to meet the linguistic<br />
requirements for particular jobs, such as interpreter, interrogator, and<br />
translator. The test is not intended to assess other skills, such as<br />
interrogation techniques, which may be required by these jobs.<br />
The history of military proficiency testing goes back to 1948 when the Army<br />
first developed what was then called The Army Language Proficiency Tests.<br />
Tests in 31 languages were developed and used between 1948 and 1953. Each<br />
test consisted of four parts: Parts I and II covered understanding of the<br />
spoken language; Part III dealt with reading ability; and Part IV measured<br />
writing ability through knowledge of grammar. Due to military pressures at<br />
the time, the development of these tests involved only a minimum of
research. During and following the Korean War it became apparent that
the tests were not always reliable in discriminating between individuals
of varying levels of language ability. As a result of this recognized defi-
ciency, Army user agencies requested the development of new Army Language
Proficiency Tests. Accordingly, in 1955 The Adjutant General's Office
(the predecessor of The Defense Language Institute) was directed
to prepare language tests, both to replace existing instruments and to develop
new tests in languages for which there were no tests at that time. Even
though the new prototype tests in Russian and Chinese-Mandarin were carefully
developed and validated, the other new tests, approximately 35 in number,
consisted mainly of translations of the two prototype tests and were not
subjected to systematic validation procedures before their introduction into
the Defense Language Program in 1958. As a result, the Russian proficiency test
was used as the model for all European languages and the Chinese-Mandarin
test was used for all Asiatic languages. It became apparent following
extensive use during the subsequent years that these 38 additional tests were
not as valid or as reliable as had been assumed. Additionally, the use of a
single test form for each of these tests over more than a decade further
reduced their effectiveness due to possible test compromise.
Recognizing these deficiencies, The Defense Language Institute initiated a
series of new projects in 1966 aimed at developing follow-on DLPT IIs, or
second generation DLPTs, to replace the old ones. The first group of DLPT IIs
was developed under
contract with the Educational Testing Service, utilizing language experts
from the DLI and from various universities, in addition to resources from
ETS. Seven high-enrollment languages were included, such as Russian,
Spanish, Arabic, etc. These initial DLPT IIs are being introduced into the
Defense Language Program at the present time. As these tests are introduced
into the system and prove to be operationally sound, other language proficiency
tests and alternate forms will then be introduced into the system. Current
plans call for the development of two or three alternate forms as a minimum
for each high-density DLPT II. DLI's R&D plan includes the construction and
validation of a computer bank of test items for each language, from which
multiple test forms can be assembled automatically and actually produced
in final form, both test booklets and audio tapes, by computer. These were
the facts and some of the events which led to our current efforts with regard
to second generation DLPTs. Additional details concerning specific aspects
of the developmental work program are given below.
These early prototype tests were developed by Army Language School personnel,
in cooperation with representatives from the Personnel Research Branch of The
Adjutant General's Office. Each test has two parts: Listening Comprehension,
consisting of sixty items, and Reading Comprehension, also consisting of sixty
items. All items are multiple-choice type, each consisting of a lead sentence
or a short paragraph in the foreign language and four optional responses
printed in English in the test booklet. Three of these options are plausible
distracters, whereas only one is the correct answer. The options can be words,
phrases, sentence translations, or paragraph gistings, depending on the objec-
tive of the item. Listening comprehension item leads are voiced on a tape,
but reading comprehension item leads are printed in the test booklet. After
completion, the tests were field tested, including a validation measure, with
Army personnel possessing varying degrees of skill and experience in the two
languages. The validation instrument consisted of job-oriented tasks such
as interpreting and translating.
The median correlation between the Chinese
test and the criterion measure was .70 and between the Russian test and the
criterion measure .75. The K-R 20 reliability of the Chinese test was .96
and the Russian test .95.
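The reliability figures above come from the standard Kuder-Richardson Formula 20. The sketch below shows how K-R 20 is computed; the item-response matrix is toy data for illustration, not DLPT results.

```python
# Kuder-Richardson Formula 20 (K-R 20), the internal-consistency statistic
# cited above. Input: one row of 0/1 item scores per examinee.

def kr20(responses):
    """K-R 20 reliability for a list of examinees' 0/1 item scores."""
    n = len(responses)            # number of examinees
    k = len(responses[0])         # number of items
    totals = [sum(person) for person in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(person[j] for person in responses) / n    # proportion correct
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# Toy data: 4 examinees, 5 items.
data = [
    [1, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
]
print(round(kr20(data), 3))
```

A value near 1.0, such as the .96 and .95 reported above, indicates that the items hang together as a measure of a single proficiency.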
Language specialists at The Army Language School were furnished English
translations of the two prototype tests mentioned previously and were then
instructed to use these translations, item by item, in developing the new
tests. Since the conditions for deviating from the given translations were
few and rather rigid, the resultant new tests were essentially parallel in
many ways to the prototypes. Unfortunately, not only were the options identi-
cal, but the correct option remained the same key across all tests. It was
subsequently discovered that this procedure increased the chances for test
compromise much more than usual. The reason is that many DLI graduates would
either subsequently reenroll in a different language course or take a DLPT to
establish military competency in a foreign language studied elsewhere.
Furthermore, an erroneous assumption was made that since all the tests were
parallel in content they would therefore all have the same level of difficulty.
For this reason, no attempt was made to standardize raw scores and the same
skill-level cut-off scores were used across all languages. This initial
assumption was unfortunate and incorrect, as later validation data disproved
the theory.
DEVELOPMENT OF SECOND GENERATION DEFENSE LANGUAGE PROFICIENCY TESTS (DLPT II)
It was considered essential to correct the operational situations described
above and to replace the then current tests which were outdated and
possibly compromised through the overuse of only one test form per language.
Accordingly, second generation DLPTs are being developed, printed and
systematically introduced into The Defense Language Program during a three
to four year period. The DLI second generation tests have the following
characteristics:

a. Although they are identical in format to the old tests, the content
of each is unique and independent from any other. Test content is based
strictly on a description of skill-level objectives for each language.

b. Current plans specify at least three alternate forms for high-
density languages and two forms for low-enrollment languages.

c. All test forms will be subjected to item analysis and form-equating
procedures.

d. The validation of each test will be based on the collection and
analysis of field data concerning job-performance ratings.

e. Each newly developed test will be independently normed and new skill-
level cut-off scores will be established.

f. Raw scores will be converted into standard scores to achieve mutual
comparability for languages which have varying degrees of difficulty.
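Item f describes a conventional linear standard-score transformation. The sketch below illustrates the idea; the target mean of 100 and standard deviation of 20 are assumed values for illustration, not DLI's published scale.

```python
# Linear raw-to-standard score conversion: each language's raw-score
# distribution is rescaled onto a common metric, so the same cut-off score
# means the same thing for an "easy" and a "hard" language.

def to_standard(raw_scores, target_mean=100.0, target_sd=20.0):
    """Rescale a raw-score distribution onto a common standard-score scale."""
    n = len(raw_scores)
    mean = sum(raw_scores) / n
    sd = (sum((x - mean) ** 2 for x in raw_scores) / n) ** 0.5
    return [target_mean + target_sd * (x - mean) / sd for x in raw_scores]

# Two languages with very different raw-score distributions end up on the
# same comparable scale:
easy_language = [70, 80, 90, 100, 110]
hard_language = [30, 40, 50, 60, 70]
print([round(s, 1) for s in to_standard(easy_language)])
print([round(s, 1) for s in to_standard(hard_language)])
```

Because the conversion is anchored to each language's own norm group, a standard score of 100 represents average proficiency regardless of how hard the raw test was.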
DLI R&D PRIORITY I (DLISDA, P-0015)

(CN) Chinese (M)
Arabic (Modern Standard):
    Egyptian
    Syrian
    Iraqi
    Saudi
(FR) French
(GM) German
(JA) Japanese
(LA) Spanish (LA)
(TH) Thai
(VN) Vietnamese (N)
(VS) Vietnamese (S)
(RU) Russian
(KP) Korean
(CM) Chinese-Mandarin (Simplified characters)
(SD) Chinese-Amoy (originally Priority III)
DLI R&D PRIORITY III (DLISDA, P-0130-1)

(AB) Albanian
(BU) Bulgarian
(CM-N) Chinese-Mainland
(GR) Greek
(IT) Italian
(ML) Malay
(PF) Persian (Farsi)
(PT) Portuguese (European)
(RQ) Romanian
(SC) Serbo-Croatian
(SR) Spanish (Castilian)
(SW) Swahili
(UK) Ukrainian (deleted)

Initial R&D work is underway; completion is expected during FY 1975.
DLI R&D PRIORITY IV

(AA) Afrikaans
(DA) Danish
(DU) Dutch
(FJ) Finnish
(HE) Hebrew (Mod)
(NR) Norwegian
(PC) Persian (Afghan) a/
(SL) Slovenian
Swedish
(UR) Urdu a/

Preliminary R&D work is underway; due to the lower priority a scheduled
completion date is not feasible at this time. a/ No DLPTs are available and
structured interviews may be the more expedient alternative.
Enlisted Selection and Classification Testing
in the US Army

Milton H. Maier

Paper presented
Military Testing Association
San Antonio, Texas
29 Oct - 2 Nov 1973

Introduction
For several years now testing has been in a state of turmoil. Pre-
viously, tests were quietly given in schools, industry, and the armed
services, and except for those who were directly involved, such as re-
searchers and personnel managers, no one really paid too much attention
to what was happening in the field. In the 1960's, though, testing, along
with so many social practices and customs, was called into question, and
today testing practices are regulated by law, the courts, and the Equal
Employment Opportunities Commission, as well as by the test developers and
users. Whereas in former times, a new test might be of local interest to
the immediate user, or perhaps to the professional community if it had some
unique characteristics, today a new test is subject to scrutiny by a good
many parties. In the midst of this concern, the new Army Classification
Battery, or ACB, was born, and as might be expected, its arrival did not
go unnoticed. In my paper, I will describe briefly what the new ACB is, how
it is used operationally, and finally give some comments on how it has
fared since its introduction on 1 May of this year.
What is the new ACB?
The new ACB arrived on the scene at about the same time as the all-
volunteer Army, and it is admirably suited to the current needs for
selecting and classifying enlisted personnel. One advantage is that the
new battery has a broader coverage than the old one. The two primary
changes are that we expanded the number of subtests in the academic or general
ability domain, and that we included more tests of interests and attitudes,
or measures of the willingness factor. We retained the old standbys of
Arithmetic Reasoning and Word Knowledge, and we added measures of Science
Knowledge and Mathematics Knowledge to provide more precise measurement in
this critical area of general mental ability. By including these additional
tests, we obtained more accurate prediction of success in the different job
areas. (Chart 1)
The other major improvement is that we added more interest measures. The
earlier ACB contained the Classification Inventory, which was a self-description
instrument to help identify the infantryman. The original inventory grew out
of research done during the Korean War. We did not, however, have any other
self-description measures in the ACB until the new ACB was developed. Now,
we still have a measure of combat interest, acquired during the Viet Nam con-
flict, and three other interest measures: in electronics, in mechanical main-
tenance, and an interest in attending to detail, used for clerical-administra-
tive jobs, field artillery, and for equipment operators, such as drivers.

The interest measures add to the prediction of success in relevant job
training, in addition to the measures of general ability and specific measures
of information in a field, such as electronics information and automotive
information.
The information tests also tap a willingness factor. They cover
material that most anyone with an interest and aptitude in the area can
pick up. Electronics and automobiles are common to our culture, and a
young person can readily become expert in these areas if he has the initia-
tive to engage in relevant activities. And as part of our rapid social
change, young women no longer are as excluded from these areas as in
former times.
The Trade Information subtest covers content peculiar to the skilled<br />
construction trades.<br />
Finally, we have two perceptual tests that do not involve any reading.
These are the familiar pattern analysis, or spatial ability, and a new test,
called Attention to Detail, which requires the ability to discriminate the
letter C embedded in a series of O's.
The tests are combined to provide nine aptitude area scores, which are<br />
used as prerequisites for job training. The nine areas cover the full range
of Army jobs, from combat and field artillery, through the maintenance areas -
electronics and mechanical - to clerical-administrative and a set of
jobs called skilled technical - our medics, MPs, military intelligence, and
the like. We have two other job areas that are more unique. One is called
surveillance and communications, which includes radio operators and target
acquisition jobs. The other is called operators/food, which includes
missile crewmen, cooks, and drivers. (Chart 2)

I should say that all of these job groupings are empirically based. We
did not force the jobs to come out in any special way. We examined the pro-
files of validity coefficients both visually and analytically to determine
which jobs clustered together, and the grouping shown here is what emerged.
Looking at the subtests in each aptitude area, it becomes apparent
that Arithmetic Reasoning, AR, is omnipresent; it occurs in seven of the
nine areas, plus in GT, which is used to determine eligibility for addi-
tional testing or special programs. Another fact is that the new aptitude
areas are more complex than the old ones. These contain three to five sub-
tests each, as contrasted to only two in the old aptitude areas.
The result of putting so many tests in each composite is that the
new aptitude areas are more accurate predictors of success in job training,
which is good, but also that the scores are more highly intercorrelated,
which at first seems bad. We found, however, that the new area scores do
a much better job of sorting out the manpower into appropriate job areas
than did the old ones.
We analyzed the new scores by subjecting them to extensive analyses
through simulating scores on a computer to represent random samples from the
mobilization population. We assigned the simulated "men" to job areas based
on both the new and old aptitude area scores. We kept the quota restraints
that certain percentages had to go into each job area, which reflected the
real Army quotas at the time. The consistent result is that the performance
estimates were higher under the new set of scores than under the old. The
increase in predictive accuracy more than compensated for the increased
intercorrelation.
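The simulation just described can be sketched roughly as follows. The job areas, quotas, and score model here are invented stand-ins for illustration; the actual study used the full set of nine aptitude areas and the real Army quotas of the time.

```python
# Rough sketch of a quota-constrained assignment simulation: simulated
# examinees receive a predicted-performance score in each job area, and
# are assigned greedily subject to fixed quotas.
import random

random.seed(0)
AREAS = ["combat", "electronics", "clerical"]   # invented stand-ins
QUOTAS = {"combat": 4, "electronics": 3, "clerical": 3}

# Each simulated "man" is a dict of predicted scores by job area.
men = [{a: random.gauss(100, 20) for a in AREAS} for _ in range(10)]

def assign(men, quotas):
    """Greedy assignment: repeatedly place the best remaining (man, area) pair."""
    remaining = dict(quotas)
    unplaced = list(range(len(men)))
    assignment = {}
    pairs = sorted(((men[i][a], i, a) for i in range(len(men)) for a in AREAS),
                   reverse=True)
    for score, i, a in pairs:
        if i in unplaced and remaining[a] > 0:
            assignment[i] = a
            unplaced.remove(i)
            remaining[a] -= 1
    return assignment

result = assign(men, QUOTAS)
mean_perf = sum(men[i][a] for i, a in result.items()) / len(result)
print(len(result), round(mean_perf, 1))
```

Running the same assignment under two score sets (old versus new composites) and comparing the resulting mean predicted performance is the essence of the comparison reported above.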
Another gain realized from the new aptitude areas is that each of them
contains tests that require the ability to read. In former days, illiterate
men could qualify for Army service, and many did. Under the new system,
some functional illiterates are still getting through, that is, men who read
below the fifth grade level, but the number has been reduced to an estimated
five percent.

Based on the promise of more accurate measurement of job potential,
the new ACB was adopted for operational use, and it was implemented on
1 May 1973. Now let's see how it is being used.
Operational use of the new ACB
The use of the new ACB has gone through two phases already in its
short life. In the two months of May and June, a separate AFQT was
administered to all Army male applicants. Beginning 1 July, experimental
mental standards were adopted by the Army in which AFQT was dropped as a
separate test, and an AFQT score is obtained from the ACB. The experimental
mental standards were also changed on 1 July to require one aptitude area score
of 90 or better for high school graduates and two scores of 90 or better for
non-graduates, in addition to a percentile score of 10 or better on the AFQT
obtained from the ACB.
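The experimental standards just described amount to a simple eligibility rule, which can be sketched as:

```python
# The 1 July experimental mental standards, expressed as a decision rule:
# one qualifying aptitude area score (90+) for high school graduates, two
# for non-graduates, plus an AFQT percentile of 10 or better.

def eligible(afqt_percentile, aptitude_scores, hs_graduate):
    """Return True if the applicant meets the experimental mental standards."""
    needed = 1 if hs_graduate else 2
    qualifying = sum(1 for s in aptitude_scores if s >= 90)
    return afqt_percentile >= 10 and qualifying >= needed

print(eligible(35, [95, 80, 85], hs_graduate=True))   # graduate: one 90+ suffices
print(eligible(35, [95, 80, 85], hs_graduate=False))  # non-graduate: needs two 90+
```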
The new aptitude areas made the experimental standards possible. Since
each one contains some measure of general mental ability, plus specific
aptitude measures, a qualifying score of 90 is more indicative of ability to
succeed. The old aptitude areas did not measure enough general mental ability
to use with the same degree of confidence. As of the time of writing no
decision has been made about which standards to use as the official ones.

The ACB is now given at the time of application for enlistment, which
provides the recruiter, counselor and applicant with the best possible
information about the applicant's mental qualifications and the job areas
in which he is best qualified.
The tests are administered at AFEES or by Mobile Examining Teams,
METs, who take the tests to the applicant. The entire battery requires
about three hours to administer, which is longer than the previous testing
time at AFEES, but the extra time is well spent. The new recruit actually
spends less time in testing than before. He used to spend up to eight
hours between AFEES and reception stations taking tests, but now he
spends only the three hours at AFEES, plus another half-hour taking the
Auditory Perception, or Radio Code, test at AFEES if he is applying for
training requiring radio code, or at reception stations, if he is not asking
for a commitment for these jobs.
The new ACB is also given to women applicants at AFEES or by the
Mobile Examining Teams. An AFWST, i.e., Armed Forces Women's Selection
Test, score is also obtained from the ACB. No additional testing of women
applicants to determine qualification is required. The new ACB is also
being readied for use by reserve components and for retesting at posts, camps,
and stations throughout the world. Soon the entire Army will be switched
over to the new ACB. The Marines also are using the new ACB, but just as
with the Army, the transition is not yet complete.
Evaluation of the new ACB.<br />
One consideration in evaluating the new ACB was its effect on the flow
of marginal men, or those with low general mental ability. Prior to intro-
duction of the new battery, our research had shown that fewer men with low
AFQT scores would obtain qualifying aptitude area scores of 90 or better.
There was some concern that the effects would be serious enough to affect
the manpower flow. Our argument was that the Army has little difficulty
in recruiting men with low mental ability, and the experience since
1 May has borne us out. In July, when the ceiling on the number of men with
marginal mental ability was removed, over a third of the accessions were
in the marginal category. In August, the influx subsided, and the number
was more acceptable at just over 20 percent. Army policy in the past few
years has been to hold the number of marginal accessions to somewhat under
20 percent, and that is about the number coming in.
The quality of new accessions in the past was measured on the basis of
AFQT scores, which provide a single index of general trainability. All
men, regardless of aptitude in specific job areas, were categorized on the
basis of their general trainability. With the old aptitude areas, the
AFQT did provide the best measure of quality and the procedure was
appropriate. With the new aptitude areas, categorizing men on the
basis of AFQT is no longer as appropriate. Since the new scores contain
measures of general ability, the number of qualifying aptitude area scores
at or above specified levels provides a more accurate description of the
quality of the accessions.
We have been working with Army personnel management for some time to
develop a new basis for categorizing quality. Although at the time of
writing there has been no final resolution about the definitions of the
categories, it is clear that the intent is to broaden the basis by including
all the aptitude areas in developing an overall quality index. The basis
that is finally adopted should make sense from all points of view;
some considerations are that it should have a known relationship to the
mobilization population, it should provide useful information to personnel
management, and it should not stigmatize the individual. One of the complaints
about the AFQT mental categories is that the Cat IV's, or men in the marginal
category, are generally assumed to be the misfits. While there may be no
way to avoid stigmatizing completely, the new system should be descriptive
of trainability in specific areas, which may help prevent a general labelling.
Another area of concern about tests is that of the effectiveness of the
tests for minority group personnel. The tests should be equally effective
predictors of success for all groups, that is, blacks and whites with the
same test scores should have the same expected level of performance. The
new ACB is equally valid for blacks and whites, and blacks and whites with
the same test score have the same predicted level of performance in job
training courses.

From another point of view, the question is: are blacks underrepresented
in the Army accessions as compared to the population as a whole? And here
the answer is no. In July, 1973, about one-third of the accessions were
black, and in August, 20 percent. These numbers compare to about 20 percent
in fiscal year 1973, and about 13 percent in the population as a whole. The new
ACB thus is not excluding an undue proportion of blacks.

The matter of racial bias in employment, in this case joining the Army,
is of course more complicated than just a matter of test scores or even
prediction of success from test scores. Test scores are only part of the
job training, utilization on the job, and advanced training, would hardly
be a satisfactory solution for either the individual or the employer.
Another group that merits special consideration is women applicants.
The new ACB is used to determine mental qualification of Army women appli-
cants. They must qualify on the combination of Arithmetic Reasoning and
Word Knowledge subtests, which provide the AFWST score, and in addition
obtain at least two aptitude area scores of 90 or better. The Army has
had little difficulty in meeting the quotas for WACs. The ACB, however,
was standardized on an all-male sample, with both blacks and whites
included, but no women. The question may arise whether the new ACB is fair
to women. The answer is yes, to the best of our ability. Based on large
samples of males and females in high schools, the population means on
arithmetic and vocabulary, which constitute the new AFWST, are about the
same for both sexes. As for the aptitude areas, both sexes receive the
same job training, and therefore women require the same aptitude as men.<br />
If the prerequisite for job training is 100, or average, on a particular
aptitude area, women as well as men need to meet this prerequisite because
students of both sexes need to master the same knowledge and tasks. The
ACB scores for women are considered to be appropriate measures of their<br />
potential to perform satisfactorily in job training.<br />
In summary, the new Army Classification Battery has been in use for
about six months. It is administered at the time of application for en-
listment to provide information about the applicant's qualifications in
the different job areas. The evidence is that it is not keeping out
excessive numbers of marginal men, blacks, or women. Reports received from
the field are generally favorable in that it is an efficient and effective
method to assess the potential of applicants.
NEW APTITUDE AREA COMPOSITES

General Ability Tests:
    Arithmetic Reasoning (AR)
    General Information (GI)
    Mathematics Knowledge (MK)
    Word Knowledge (WK)
    Science Knowledge (SK)

Mechanical Ability Tests:
    Trade Information (TI)
    Electronics Information (EI)
    Mechanical Comprehension (MC)
    Automotive Information (AI)

Perceptual Ability:
    Pattern Analysis (PA)
    Attention to Detail (AD)
    Auditory Perception

Self Description:
    Combat Scale
    Attentiveness Scale
    Electronics Scale
    Maintenance Scale

(The chart's matrix showing which subtests enter each of the nine aptitude
area composites and GT is not legible in this copy.)
THE DEVELOPMENT OF THE ENGLISH COMPREHENSION
LEVEL SCREENING TESTS FOR FOREIGN STUDENTS
BY: Cortez Parks<br />
The English Comprehension Level (ECL) testing system is the
primary quality control used in the English Language Training
Program (ELTP) conducted by the Defense Language Institute, English
Language Branch (DLIEL) at Lackland Air Force Base and at numerous
overseas locations.

It is a unified testing system in that all test forms are
originated and controlled by one agency. The construction,
validation, distribution, administration, and control of the various
types of tests prescribed by DLI are the responsibility of the Tests
and Measurement Branch, Development Division, English Language
Branch.
Among the tests produced by the Tests and Measurement Branch
are the English Comprehension Level Screening Tests, or "ECL" tests
as we refer to them, which are the subject of this paper. The ECL
Screening Tests consist of general English tests, usually called ECL
tests, and the Specialized English Terminology (SET) tests.
The ECL test is designed to measure an individual's English<br />
listening and reading comprehension and to ascertain his capability<br />
to acquire knowledge of and to function effectively in English<br />
language training and work situations.<br />
The ECL test does not measure speaking or writing ability.
These skills are measured subjectively by the students' instructors.
This is done by rating the speech or writing production by the use
of a speaking and writing proficiency level code key. Correlation
studies made at DLIEL of the instructors' evaluations compared to the
ECL indicate that the subjective evaluation of these skills has a
high correlation with the ECL scores.

ECL tests, as well as the SET tests, have been correlated with
language performance of native speakers of English in various
specialized fields (electronics, maintenance, etc.).
The minimum ECL scores and cutoff scores of the SET tests have
been established for different levels of training in the various
specialties of the Army, Navy, and Air Force. These scores represent
the minimum English comprehension level necessary for foreign students
to successfully cope with English instruction in tasks involving
varying degrees of difficulty and/or danger.

An individual with an ECL score below the minimum requirements
cannot normally be expected to perform satisfactorily and efficiently
when exposed to training in those fields.
A 100 ECL score indicates that the foreign student can use
standard training materials without any difficulty.

A 0-25 ECL includes beginning language students with no previous
English language background or those with a weak English background.

A 0-39 ECL indicates that the student is at the elementary level.
Language instruction given at this stage consists principally of
carefully selected basic vocabulary, sentence pattern and related
pronunciation drills. No formal specialized instruction is attempted.
* .’<br />
46<br />
.,A---- ._<br />
.’<br />
\<br />
,<br />
h<br />
.:.<br />
. -‘I<br />
\ ,‘;, ‘-,<br />
t
A 40-59 ECL indicates that the student is at the intermediate
level.

A 60-69 ECL is the qualification level for direct entry into
CONUS or 3rd country apprentice specialized training.

A 70-79 ECL is the required qualification level for direct entry
into CONUS or 3rd country in most basic courses (e.g., primary pilot
training, etc.).

80 and above is the qualification level for professional career
and advanced courses (C & GS schools, surgeons and advanced medical
courses, pilot transition, advanced flying safety officers, demolitions,
control tower operators, etc.).
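The qualification bands above can be collected into a simple lookup. The band labels below paraphrase the text; the cutoffs are as given.

```python
# ECL qualification bands, as described in the text. Labels are paraphrases.

def ecl_band(score):
    """Map an ECL score to its qualification band."""
    if score >= 80:
        return "professional career and advanced courses"
    if score >= 70:
        return "direct entry, most basic courses"
    if score >= 60:
        return "direct entry, apprentice specialized training"
    if score >= 40:
        return "intermediate level"
    if score >= 26:
        return "elementary level"
    return "beginning level"

print(ecl_band(75))
```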
ECL levels are not percentage scores but are scores converted to established qualification levels; i.e., an individual with a 70 ECL knows much more than twice the "amount" of English than one with a 35 ECL, and the progressive proportions between 45 and 90 ECL are not the same as those between 35 and 80. The ECL spread "value" is not linear, but is more like a modified logarithmic curve.
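The nonlinearity can be illustrated with a toy model. The sketch below is purely hypothetical (the actual DLIEL conversion tables are not given in this paper); it simply assumes the underlying "amount" of English grows exponentially with ECL score, which is one way a scale can behave like a modified logarithmic curve.

```python
import math

# Hypothetical model: the raw "amount" of English knowledge grows
# exponentially with ECL score. The growth constant k is invented
# for illustration only; it is not taken from the actual tests.
def knowledge(ecl_score, k=0.04):
    return math.exp(k * ecl_score)

# On such a scale, a 70 ECL reflects far more than twice the
# knowledge of a 35 ECL, even though 70 is exactly twice 35.
ratio = knowledge(70) / knowledge(35)  # e^(0.04*35), roughly 4.06
```

Under this assumed curve, equal ECL intervals higher on the scale represent progressively larger amounts of English, which is the point the paragraph above is making.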
The language proficiency levels required for entry into the various training courses are set by the services responsible for operating the training facilities. Applicants failing to meet the language proficiency level requirements for direct entry are programmed through the English Language Branch for additional language training.
In early 1962, a number of the service schools reported that a significant number of students were entering training courses who were meeting the ECL requirements but who were deficient in the specialized terminology peculiar to their specialties. This was especially true in
the areas of electronics, maintenance, and weather. The service schools were finding it necessary to give these students extracurricular instruction in technical terminology. At a Tri-Service ECL Test Conference at Air Training Command (ATC) in 1964, a decision was made to add SET tests to the ECL screening tests.
SET tests were developed in five specialized areas: electronics, maintenance, supply, weather, and medical. Thus, Series 6500 was the first series of the ECL screening tests that consisted of ECL and SET tests.
By use of the SET tests, Military Assistance and Advisory Groups (MAAGs) were alerted to the necessity of programming students through the DLIEL specialized phase prior to their enrollment in CONUS training courses.
As the result of a language training survey conducted by DLI upon its inception in 1966, it was decided to add General Army and General Navy terminology to the screening tests. These two additional tests were incorporated into the 6700 Series screening tests. Also, several of the technical training schools were requested to submit lists of technical terms along with basic course materials to DLIEL for use in the development of SET tests. These materials are updated periodically.
The 7200 Series SET tests now in use and the 7400 Series, which will replace the 7200 Series in July 1974, consist of two forms each of Electronics, Maintenance, Supply, Weather, Medical Professional, Medical Service, General Army, and General Navy tests that measure the comprehension of technical terminology considered essential by the respective technical training schools.
The ECL test consists of 120 questions, 60% of which are on tape and test the aural comprehension of the student. 40% of the questions test reading comprehension. 40% of all objectives tested are taken from the ALC Elementary and 60% from the Intermediate texts. Items are presented in the form of questions, statements, and dialog in the listening part of the test, and in the form of incomplete statements, underlined objectives, word order, and complicated sentences in the written part. 75% of the questions consist of vocabulary and idioms and 25% test the students' abilities to select the proper structure. All items are of the multiple choice type with four choices of answers, only one of which is the correct answer. The standards for the construction of items are extremely rigid.
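The stated percentages fix the item blueprint exactly; the arithmetic below simply restates the figures in the paragraph as item counts.

```python
# Item blueprint implied by the stated percentages. Integer arithmetic
# is used because int(0.60 * 120) can truncate to 71 in floating point.
TOTAL_ITEMS = 120

aural_items = TOTAL_ITEMS * 60 // 100        # taped listening items
reading_items = TOTAL_ITEMS * 40 // 100      # written comprehension items

vocab_idiom_items = TOTAL_ITEMS * 75 // 100  # vocabulary and idioms
structure_items = TOTAL_ITEMS * 25 // 100    # structure selection

# Each split accounts for the whole test.
assert aural_items + reading_items == TOTAL_ITEMS
assert vocab_idiom_items + structure_items == TOTAL_ITEMS
```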
The resulting product is the service trial edition of the screening test, consisting of 12 forms of ECL tests and two forms of each of the SET tests.
Validation of the ECL and SET tests is accomplished by pretesting the service trial edition with the foreign students at the various technical schools. Since the students at the technical schools are all in the upper quadrant of language proficiency, the ECL tests are also pretested among students in the lower quadrant at overseas schools.
The number of cases used for experimental testing depends upon the type of statistical analysis required. For computing item analyses, at least 100 cases were used. For computing correlation or for norming, at least 300 cases were obtained.
A great amount of effort is being expended in the development of the ECL screening tests in order to obtain the highest reliability and validity for each form of the tests. For test reliability, both the internal item consistency and the alternate-form reliability are carefully checked. For test validity, we check the content validity, construct validity, and predictive validity. The reliability index and the construct validity coefficient of each of the different forms have been kept above .90. The coefficients of the predictive validity have shown above .65. All of these cutoffs are considered high standards in test validation. For an example of statistical data, please refer to handout, Table 1.
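For readers unfamiliar with the internal-consistency index named in Table 1, Kuder-Richardson Formula 21 can be computed from nothing more than the number of items, the mean, and the variance of the total scores. The scores below are invented for illustration, not taken from the ECL data.

```python
import statistics

def kr21(num_items, scores):
    """Kuder-Richardson Formula 21 internal-consistency estimate,
    computed from total test scores alone."""
    m = statistics.mean(scores)
    v = statistics.pvariance(scores)  # variance of the total scores
    return (num_items / (num_items - 1)) * (
        1 - m * (num_items - m) / (num_items * v)
    )

# Hypothetical total scores on a 100-item form:
scores = [70, 85, 60, 90, 75, 80, 65, 88]
r = kr21(100, scores)  # roughly 0.84 for this sample
```

KR-21 assumes all items are of roughly equal difficulty, which is why the tighter item-level indices (and the alternate-form Pearson coefficients) are also checked for each form.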
We have also correlated the ECL tests against the Test of English as a Foreign Language (TOEFL) tests, the University of Michigan language proficiency tests, and the American University language tests. Pretesting has also been conducted at San Francisco State College, the University of Texas, Texas A&M, and the University of Minnesota. The results were around .80, indicating a high correlation with their criterion tests of English proficiency.
The SET tests do not use converted standard scores as do the ECL tests. They are designed to use percentage scores with pass/fail cutoff scores. For the 6500, 6700, 6900, 7200, and 7400 SET tests, we have used minus one standard deviation as the cutoff point since these are proficiency type tests.
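As a sketch of that pass/fail rule, with invented percentage scores (whether the population or sample standard deviation was used is not stated; the population form is assumed here):

```python
import statistics

# Hypothetical SET percentage scores from a pretest sample.
scores = [80, 70, 90, 60, 75]

# Cutoff set at one standard deviation below the mean, as described
# for the 6500 through 7400 series SET tests.
mean = statistics.mean(scores)   # 75.0
sd = statistics.pstdev(scores)   # 10.0 (population s.d. assumed)
cutoff = mean - sd               # 65.0

passed = [s for s in scores if s >= cutoff]
```

With these invented scores the cutoff falls at 65, so four of the five examinees pass.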
TABLE 1

                                 EL-69       MA-69       SU-69       WX-69       ME-69       AR-69       NA-69
                                 A     B     A     B     A     B     A     B     A     B     A     B     A     B
Maximum possible score           100   100   120   120   100   100   100   100   120   120   100   100   100   100
Mean score                       70.4  70.9  81.2  77.0  54.1  53.6  62.8  63.3  87.3  87.8  66.5  67.0  63.5  63.3
Standard deviation               15.8  15.3  25.6  24.2  18.1  17.8  20.3  20.1  24.1  24.9  15.0  13.7  15.5  15.3
Mean difficulty index            .65   .67   .62   .59   .57   .57   .65   .63   .64   .65   .66   .66   .62   .63
Mean discrimination index        .43   .37   .49   .48   .47   .47   .49   .50   .43   .44   .37   .32   .37   .36
Reliability index for internal
consistency (Kuder-Richardson
Formula 21)                      .93   .92   .97   .96   .93   .93   .95   .95   .97   .97   .91   .89   .91   .90
Alternate-form reliability
coefficient, A x B (Pearson
product-moment correlation)         .87         .96         .89         .95         .94         .88         .92
Validity coefficient of
correlation between Specialized
Terminology Tests and ECL Tests  .79   .79   .88   .86   .77   .75   .86   .86   .76   .76   .70   .72   .61   .69
AN EXPERIMENTAL MULTIMEDIA CAREER DEVELOPMENT
COURSE FOR NEW MENTAL STANDARDS AIRMEN
George P. Scharf
Air Training Command - USAFSMS, Chanute
This paper is a report on an experiment to determine if modifying the Fire Protection Specialist Career Development Course (CDC) could improve the CDC as a training device for low mental aptitude airmen. Modifications to the regular CDC included simplifying the written materials and adding 18 hours of audio supplementation. The experiment used three groups of New Mental Standards (NMS) Project 100,000 airmen enrolled in three versions of the CDC. The control group took the regular fire protection specialist CDC. The first experimental group took the modified CDC with audio supplementation. The second experimental group took the modified CDC without any audio supplementation. Criterion for effectiveness was the final grade performance of the personnel in the three groups. Results indicated both experimental groups had mean final grade scores significantly better than the control group. However, there was no significant difference between the mean final grade scores of the two experimental groups. Mean course completion times for the three groups were within a week of one another. In this experiment, the audio-supplemented CDC was not a cost-effective method of course presentation.
In recent years the federal government has been quite concerned with improving the utilization of the nation's manpower resources. In the late 60s and until 1971 there was considerable pressure on the military services to accept and train personnel with low mental abilities. One such program, Project 100,000, began in 1966. This program called for the military services to accept up to 100,000 enlisted personnel scoring between the 10th and 30th percentiles on the Armed Forces Qualification Test (AFQT). These are persons in the Category IV mental group.
As a part of Project 100,000 within the Air Force, a number of career fields were selected for implementing the use of large numbers of marginal aptitude, New Mental Standards (NMS) personnel. While NMS airmen were a part of Project 100,000, not all Project 100,000 airmen were NMS people. The definition of New Mental Standards is, "those who have scores in the lower half of the Mental Group IV." One of the career fields selected to utilize NMS airmen was that of Fire Protection Specialist, AFSC 57150.
NMS personnel were assigned to the Fire Protection Specialist Course at Chanute directly from Lackland. However, to merit this assignment, the
student had to have (or achieve before leaving Lackland) a reading level of at least the sixth grade. When the NMS airmen entered training, it was found that most of these personnel with low mental abilities and low reading levels could complete the Fire Protection Course if given adequate remedial instruction, counseling, and personal guidance. On the job, however, where structured remedial and personal help were not as readily available, low reading ability caused great difficulty for these airmen in studying and completing their Fire Protection Specialist Career Development Course (CDC). As a result, ATC Headquarters directed that an experiment be set up to determine if simplifying the CDC and adding audio supplementation would enable students to complete their CDC faster and with better grades.
The experimental design finally selected called for comparing three equivalent groups of NMS airmen who had successfully graduated from the basic Fire Protection Specialist Course at the School of Applied Aerospace Sciences, Chanute. The AFQT scores for the NMS experimental subjects ranged from 20 down to 10. Upon reaching the field and applying for their CDCs, these airmen were enrolled in one of three different versions of the Fire Protection Specialist CDC. The students in all three groups took the same pre-test and final or post-test. The three CDC versions covered the same basic information, but their modes of presentation differed as follows:
CDC 57151A. Airmen who took this course were the control group. This is the conventional Fire Protection Specialist CDC.
CDC 57100A. This was an experimental, multimedia CDC which used short sentences and large, single-column-per-page print, contained more pictorial materials than the regular CDC, and used cassette tape recordings to summarize, emphasize, and express in different terms the information read by the student. No new or different information was included in the recordings. While detailed instructions on using the tape player with the written material are a part of the CDC text, the procedure was somewhat as follows:
(1) After reading an instruction sheet on the tape player and directions at the start of the first CDC volume, the student studied the first part of his CDC. After a few paragraphs, he was instructed to put a particular cassette into the tape player and turn it on.
(2) The tape recording referenced the passages the student had just read. Speaking in turn, a group of up to four male voices then discussed, summarized, restated, or reviewed the information in the referenced paragraphs. The student was then directed by a recorded voice to turn the player off; he then returned to the written text until he reached the printed instruction to again turn on the tape player. This procedure was followed throughout the entire five volumes of the CDC, for a total of approximately 18 hours of audio review of what the student had read.
(3) Volume introductions, chapter introductions, and chapter summaries were recorded with a female voice to give more variety to the tapes.
CDC 57100B. This was the same less verbal, more pictorial CDC without the tape recordings and with all references to using a tape player removed from the written text.
Procedures<br />
Starting in August 1971, a graduation roster of each class in the Fire Protection Specialist Course was furnished the Training Research Applications Branch. The first entries into the CDC experimental program graduated from the Chanute course on 7 September 1971. When an NMS airman graduated, his name and social security account number were furnished to the Extension Course Institute, Educational Systems Branch (ECI/EDSV), at Gunter AFB in Alabama. These names were then put on an identifier list at ECI as eligible for the CDC experimental program.
a. The number and flow of NMS personnel available for this experiment never became as great as the originally projected 16 per month. One reason for this was that Project 100,000 unofficially decreased to Project 50,000 some months before the experiment began. As a result, the number of NMS students entering into the Fire Protection Specialist Course decreased by more than half. Thus, by the time the CDC course materials were prepared, there was a shortage of eligible participants for the experiment.
b. Effective 1 April 1972, Change 4 to AFM 33-3, Enlistment in the Regular Air Force, cut off the NMS student flow entirely. Under this change, Air Force enlistees must have an AFQT of at least 21, and if the AFQT is between 21 and 30 the enlistee must be at least a high school graduate. This change in enlistment requirements stopped entry into the Air Force (and thus eventually into this experiment) of NMS airmen. As a result, the number of cases analyzed in the study is as follows: regular CDC -- 24, experimental group 1 using the tape recorder -- 25, experimental group 2 using only the written course materials -- 19.
:\s applications for the Fire Protection Spc
The measure of effectiveness of the two experimental CDC groups was an analysis of their final course grade performance in comparison to the control CDC group.
All calculations were performed using statistical packages available in the PLATO IV computer-based teaching system. There are presently four PLATO IV terminals at Chanute connected to a central computer located at the Urbana campus of the University of Illinois. The mathematical formulas necessary for any analysis, including those involving unequal n's, are included in the analysis programs used. The person making the analysis is required only to select the type of analysis desired and input the raw data. Answers appear on the screen immediately. Thus, it was possible to examine a great amount of raw data concerning the subjects of the study and receive back an almost instantaneous readout of the analysis selected.
Overall results obtained from this study are summarized in Table 1.
Table 1

                                        CDC 57151A    CDC 57100A    CDC 57100B
                                        n=24          n=25          n=19
AFQT mean                               15.33         15.08         15.05
AFQT standard deviation                 3.41          3.55          5.10
Resident course grade mean              83.75         83.88         84.61
Resident course grade std. deviation    3.53          3.69          8.68
Pre-test grade mean                     56.56         57.40         61.95
Pre-test grade standard deviation       10.58         9.57          10.78
Post-test grade mean                    66.75         72.80         76.00
Post-test standard deviation            7.77          10.73         11.07
Net-mean grade score increase           10.17         15.40         14.05
Net-mean grade score increase
  standard deviation                    9.59          8.72          8.98
Completion time mean                    205.00 days   211.50 days   205.37 days
Completion time standard deviation      76.35 days    50.26 days    31.61 days
To establish that the three groups were equivalent in ability, a one-way analysis of variance was conducted on the means of the students' AFQTs, final grades from the resident Fire Protection Specialist Course, and their CDC pre-test scores. Table 2 summarizes the one-way analysis of variance on each factor among the three groups.
Table 2

Source of Variation           F        d.f.    p
AFQT                          0.045    2,65    0.9558
Resident Course Final Grade   0.341    2,65    0.7122
CDC Pre-test Grade            1.628    2,65    0.2042
As can be seen, the F ratios for each of these three student variables were all non-significant, thus indicating comparable groups.
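A one-way analysis of variance of the kind run on PLATO IV can be reproduced directly. The sketch below implements the F ratio from scratch, handling unequal group sizes as the text mentions; the group scores are invented, not the study's raw data.

```python
import statistics

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA,
    valid for unequal group sizes (unequal n's)."""
    all_scores = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_scores)
    k, n_total = len(groups), len(all_scores)

    # Between-groups sum of squares: size-weighted squared deviations
    # of each group mean from the grand mean.
    ss_between = sum(
        len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups
    )
    # Within-groups sum of squares: squared deviations inside each group.
    ss_within = sum(
        (x - statistics.mean(g)) ** 2 for g in groups for x in g
    )

    df_between, df_within = k - 1, n_total - k
    f_ratio = (ss_between / df_between) / (ss_within / df_within)
    return f_ratio, df_between, df_within

# Invented scores for three small groups:
f, dfb, dfw = one_way_anova([[66, 68, 70], [72, 74, 76], [65, 67, 69]])
# f = 10.75 with d.f. 2, 6 for these invented data
```

A non-significant F, as in Table 2, means the between-groups variation is small relative to the within-groups variation, i.e. the groups are statistically comparable.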
In order to test the effectiveness of the instructional methods, a one-way analysis of variance was conducted on the means of the post-test CDC (final grade) scores achieved by the three groups. Table 3 presents this analysis of variance.

Table 3

Source of Variation    F        d.f.    p
Post-test score        4.966    2,65    0.0098

Inspection of Table 3 shows that the post-test score F ratio was 4.966; this is a significant value indicating the post-test performance of the three groups was not comparable.
To compare the differences of the three groups on the post-test performance factor, a series of t tests were run. Their results are summarized in Table 4.
Table 4

Comparison                                       Means            t       d.f.    p
Ex Gp 1 (revised text, audio supplement)
  vs. control group                              72.80 / 66.75    2.258   47.0    .06
Ex Gp 2 (revised text only)
  vs. control group                              76.00 / 66.75    3.227   41.0    .05
Ex Gp 2 (revised text only)
  vs. Ex Gp 1 (revised text, audio supplement)   76.00 / 72.80    .967    42.0    .44
The inferences from Table 4 are that both the audio-supplemented CDC and the revised text only CDC yielded significantly higher grades than the regular CDC. However, there was no significant difference in the final course grades of the two experimental groups.
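The pairwise comparisons in Table 4 are independent-samples t tests; the degrees of freedom there equal n1 + n2 - 2, which is the pooled-variance form. A minimal sketch with invented scores:

```python
import math
import statistics

def pooled_t(sample1, sample2):
    """Independent-samples t statistic with pooled variance;
    d.f. = n1 + n2 - 2, matching the Table 4 degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    v1 = statistics.variance(sample1)  # sample variances
    v2 = statistics.variance(sample2)
    pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (statistics.mean(sample1) - statistics.mean(sample2)) / se
    return t, n1 + n2 - 2

# Invented final-grade samples:
t, df = pooled_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
# t = -1.0 with 8 d.f. for these invented samples
```

The sign of t simply reflects which group mean is larger; the probability is then read from a t distribution with the pooled degrees of freedom.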
To establish the equivalency of the completion times, a one-way analysis of variance was made on their means. This analysis is shown in Table 5.
Table 5

Source of Variation              F        d.f.    p
Completion time of CDC in days   0.064    2,65    .9383
The indication of this analysis is that the CDC completion times for<br />
the three groups were not significantly different.<br />
Discussion<br />
In this study, both experimental CDC groups had statistically significantly better learning scores than the regular CDC students. However, there was no significant difference in the learning scores of the two experimental groups. While not statistically significant, the experimental group without the audio supplementation had a mean final grade that was 3.2 points higher than the experimental group using audio supplementation. At the same time, the net-mean grade score increase of these two groups was 1.78 higher for those using audio supplementation. Both experimental groups had greater net-mean grade score increases than the control group.
While costs of a program such as this are a rather elusive thing and hard to pinpoint, it would appear that the production, implementation, and administration of the audio-supplemented CDC was between three and four times the cost of the regular CDC. This factor remains about the same when projected over a much larger number of students than participated in this experiment. For all three CDCs the production of the written materials is equivalent. In addition, the larger the printing of a CDC, the less cost per student. However, the premise that the larger the audio group the less the cost per student does not appear valid; while there might be some slight savings in buying a large number of cassettes and tape players, the loss rate plus maintenance and replacement costs of these items would nullify any projected reductions in cost per student based on a large number of users. Consequently, widespread adoption of an audio-supplemented CDC program does not appear cost effective.
However, the mean time for those using the audio tapes was a week longer than those using the other versions of the CDC. The difference in completion times for the three groups was not significant.
Conclusions
As a result of the analysis of the data evolved from this study, the following conclusions are hereby stated.
The two experimental groups had better final CDC grades than did the control group.
The differences in grades of the two experimental groups was not statistically significant. The audio-supplemented experimental group had a net-mean increase of their final grade over their pre-test grade 1.78 points more than the non-audio-supplemented group; but the non-audio-supplemented group had a mean final grade 3.2 points higher than the audio-supplemented group.
Any higher grades of the experimental groups can be attributed to the revised text as much as to any audio supplementation, as all three groups were shown as equivalent in mental ability and pre-course knowledge.
Mean completion times for the three groups were within a week of one another.
Audio supplementation was not a cost-effective means of instruction in this experiment.
References
Effectiveness of Experimental Training Materials for Low Ability Airmen, Wayne S. Sellman, Captain, USAF, AFHRL-TR-70-10, June 1970.
Optimal Utilization of On-the-Job Training
and Technical Training School
Captain Alan D. Dunham
This paper discusses work planned and completed during the AF Human Resources Laboratory's effort to develop techniques for optimizing the use of On-the-Job Training (OJT) and Technical Training School (TTS) as methods of initial upgrading for non-prior service personnel.
I. INTRODUCTION
The author's previous MTA paper (1972) discussed criteria relevant to selecting optimal OJT/TTS mixes and then described the results of work which established the feasibility of obtaining OJT cost data. Significant improvements in the OJT costing methodology have been obtained in the year since the first report, and progress has been made in the application of a measure of one other relevant criterion, the quality of the training methods. This progress is discussed in the following text.
II. REFINEMENT OF OJT COSTING METHODOLOGY
The original OJT costing methodology identified the following cost factors:
Table 1<br />
OJT Cost Factors
Student Time<br />
Instructor Time<br />
Delayed Entry into Training<br />
Records Management
Remedial Training
Equipment and Materials
Data concerning these cost factors were collected by survey for the Communications Center Operations specialty, for OJT to the 3 (semiskilled) level for non-prior service airmen. These surveys asked supervisors in this job specialty to provide detailed data on several items, including time spent learning a number of skills specific to Communications Center Operations. The supervisors had a wide range of experience with OJT. The resulting OJT estimate was considerably less than the cost of TTS for the corresponding training course. The most important aspect of this research was that it established the feasibility of obtaining cost estimates for USAF OJT through the use of a survey technique.
One undesirable result was the high response variance encountered for many of the crucial questions. This resulted in relatively wide confidence limits for the OJT cost estimates, but not wide enough to cast doubt on the cost comparison. Another undesirable characteristic was that a large section of the survey instrument was specific to Communications Center Operations. A survey instrument independent of the job performed would require less resources for survey administration and data file maintenance.
III. THE DEVELOPMENT AND EVALUATION OF ALTERNATIVE METHODOLOGIES FOR ESTIMATING THE COST OF AIR FORCE OJT
In June of 1972 the Manpower and Personnel Systems Division of the AF Human Resources Laboratory let contract F41609-72-C-0048, designed to improve upon the original OJT costing methodology. During the first phase of this contract the contractor used the original survey approach and two alternative survey techniques to simultaneously collect cost data concerning OJT to the 3-level for USAF Administrative Specialists. One of the two new alternative survey techniques asked supervisors to estimate information for eleven variables in the instrument, but most of the redundant questions were asked only once to minimize the length of the survey.
Of the 295 surveys sent to supervisors at 88 Air Force bases, 207 were returned. All responses were manually edited and keypunched for transfer of the survey data to magnetic tape. The questionnaire's length might have been responsible for a reduced response rate toward the end of the instrument, but response rates for all three survey approaches were sufficient to enable comparison of the alternative techniques.
A detailed report of the survey data and an in-depth discussion of their implications will be forthcoming soon in an AF Human Resources Laboratory technical report. For brevity and to minimize duplication, Table 2 describes item content and mean responses without discussion of their implications. The reader is cautioned against inferring strong conclusions from this limited display of data.
A cost estimate for each OJT cost factor is exhibited in Table 3. The cost models used to compute these estimates were selected from several cost models based on various combinations of the three survey approaches. Again, a more rigorous discussion will soon appear in an AFHRL publication. The OJT cost per trainee of $1545 can be compared to the marginal cost per trainee of course 3ABR70230, $2271. There are criteria other than costs which are equally important, viz., training capacities and quality of the training. Thus, conclusions regarding optimal OJT/TTS mixes for the Administrative Specialty based only on data in this report would be reached without considering other important criteria. These criteria are discussed in a Technical Report in review at AFHRL.
2 The cost of Tech School was obtained from RAND Corporation data.
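Using the two per-trainee figures quoted above, the raw cost comparison works out as follows. This sketch covers costs only; as the text warns, it ignores the training-capacity and training-quality criteria.

```python
# Per-trainee costs quoted in the text for the Administrative Specialty.
ojt_cost_per_trainee = 1545.49   # survey-based OJT estimate (Table 3)
tts_cost_per_trainee = 2271.00   # marginal cost of course 3ABR70230

savings_per_trainee = tts_cost_per_trainee - ojt_cost_per_trainee
cost_ratio = ojt_cost_per_trainee / tts_cost_per_trainee
# OJT comes out to roughly 68% of the resident-course cost per trainee.
```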
Table 2
Mean Responses for Admin Specialist OJT Survey

Responses Based on Supervisors' Total Experience:

Item Content                                        Mean Response
Persons in Section                                  7.27
Days Delay in Start of 3-level OJT                  8.60
Days Delay in Start of 5-level OJT                  8.44
Weeks to 3-level Proficiency                        13.73
Weeks from Proficiency to Actual Award of 3-level   3.31
Number of 3-level Trainees                          1.34
Number of 5-level Trainees                          1.06
Additional Training Capacity                        1.13
Operate with Fewer Personnel if no OJT              29% (3)
Are Trainees Productive?                            82% (3)
OJT Grad Superior to TTS Grad                       22% (3)
TTS Grad Superior to OJT Grad                       32% (3)
No Difference between OJT and TTS Grads             46% (3)
Percent Failing Apprentice Knowledge Test           25.64
Weeks Remedial Training                             3.60
Trainee Remedial Hours/Week                         9.07
Instructor Remedial Hours/Week                      5.28
Training Record Keeping Hours/Week                  1.94

Skill Area (4)     Wks. to Prof.   Instructor Hrs/Wk.   Trainee Hrs/Wk.
Career             2.86            3.89                 7.78
Security           2.65            1.78                 3.59
Supervision        4.80            2.94                 6.90
Equipment          3.78            3.31                 9.14
Publications       4.51            2.20                 7.83
Forms              3.64            2.65                 5.43
Communications     4.99            3.27                 8.28
Documentation      5.02            (illegible)          6.92
Library            2.70            1.16                 4.02
Postal             3.10            2.07                 5.04

3 Percent of total valid responses that responded affirmatively.
4 Corresponds to the Job Proficiency Guide skill breakdown for this specialty.
Table 2 (Cont'd)

Week    Productive Trainee Hours (Mean Response)
1       14.87
4       17.92
8       23.44
12      27.25
16      30.78
20      33.21

Responses Based on Supervisor's Experience Previous Week:

Item Content                                          Mean Response
Week of Training for Trainee                          10.88
Percent of Training Completed                         56.21
Percent of Skills Known Prior to OJT                  10.93
Days of Delay in Start of 3-level OJT                 9.16
Trainee Hours OJT                                     9.07
Instructor Hours                                      16.88
Record Keeping Hours                                  1.88
Days of Delay in Start of 5-level OJT                 6.71
Percent of Skills Known at Time of Arrival at Unit
  (TTS Grads)                                         29.57
Additional Weeks to 3-level Proficiency               5.05

Responses Based on Supervisor's Record-Keeping for One Week:

Item Content                   Mean Response
Number of Trainees             1.3
Trainee Hours Instruction      10.61
Trainee Hours Productive       24.73
Instructor Hours/Trainee       8.89
Table 3<br />
Summary of OJT Costs<br />
<br />
Cost Factor                 Cost/Trainee<br />
Trainee Time                $ 579.38<br />
Instructor Time             591.35<br />
Delayed Entry               196.05<br />
Records Management          137.70<br />
Remedial Training           40.03<br />
Equipment and Material      19.18<br />
Total                       $1545.49<br />
65<br />
Tables 4 through 6 summarize the more obvious conclusions concerning<br />
the three survey approaches used. When sample size is no problem and<br />
when information specific to the skill groups is not needed, the survey<br />
approach used should be that of Table 6. When sample size is a problem,<br />
the methodology of Table 5 would suffer from the same problem as that<br />
of Table 6, so that a form of the original survey approach appears more<br />
advantageous. Most limitations on sample size can be overcome by<br />
extending the time span of the survey.<br />
Selection of a survey approach thus involves trade-offs. The Phase I<br />
report of this contract recommends use of the general survey approach<br />
of Table 6 while retaining some of the questions used in the original<br />
methodology. This synthesis of the survey approaches was used in<br />
Phase II of the contract and appears as Appendix A of this paper.<br />
66<br />
Table 4<br />
Original Survey Approach - Responses Based on Supervisor's Total<br />
Experience<br />
<br />
Advantages (compared to alternative methodologies):<br />
a. Larger sample at any point in time<br />
b. Breaks out information by skill groups specific to specialty<br />
or job performed<br />
c. Can be completed in one sitting - minimizing turnaround<br />
and survey control workload<br />
Disadvantages:<br />
a. Specialty-dependent<br />
1. makes aggregation of OJT cost data across specialties<br />
more complex<br />
2. requires that a different survey form be developed for<br />
each specialty<br />
3. requires multiple formats for maintenance of OJT<br />
cost data.<br />
b. High response variation, resulting in relatively high variance<br />
around OJT cost estimate.<br />
67<br />
Table 5<br />
Alternative Survey Approach - Responses Based on Supervisor's<br />
Experience During Previous Week<br />
<br />
Advantages:<br />
a. Can be completed in one sitting;<br />
b. Specialty-independent;<br />
c. Variance reduced somewhat compared to original methodology;<br />
d. Recall required is for more recent experience.<br />
Disadvantages:<br />
a. Provides no data on costs related to specific skill groups;<br />
b. Requires presence of OJT trainee during previous week,<br />
which may result in a reduced sample size for any point in time.<br />
68<br />
Table 6<br />
Alternative Survey Approach - Responses Based on Record-Keeping<br />
for One Week<br />
<br />
Advantages:<br />
a. Generally reduced response variance compared to other survey<br />
approaches;<br />
b. Specialty-independent;<br />
c. Recall involved limited to one day of work.<br />
Disadvantages:<br />
a. May have reduced sample size due to need for currently<br />
enrolled trainee;<br />
b. Turnaround time is longer;<br />
1. may result in reduced sample size due to increase in<br />
time in the field;<br />
2. will require more record keeping by survey control.<br />
69<br />
Phase II of the OJT costing contract is nearly completed. During<br />
this phase the contractor utilized the survey technique developed in<br />
Phase I to collect cost data for OJT to the 3-level for the following<br />
specialties: Fire Protection, Pavements Maintenance, Fuels,<br />
Materiel Facilities, and Cook.<br />
The results of this phase will soon be ready for publication. Persons<br />
interested in these results should be able to obtain information from the<br />
Manpower and Personnel Systems Division of the AFHRL in the near<br />
future.<br />
IV. QUALITY OF THE TRAINING METHODS<br />
The OJT costing work described above shows that the Air Force<br />
can obtain, and is obtaining, OJT cost estimates compatible for comparison with<br />
TTS cost estimates. One assumption required for a straightforward<br />
comparison is that these two methods of training provide trained<br />
personnel of equal quality in terms of their productivity on the job.<br />
Cost estimates which included 'quality' in some manner would be<br />
tenuous figures based on unsupported assumptions because measures<br />
of quality useful for this purpose do not exist. Hence, cost estimation<br />
and quality comparisons are treated separately.<br />
The following discussion of methods of quality comparison is a<br />
summary of some work completed at Manpower and Personnel Systems<br />
Division, AFHRL. A thorough analysis of the 'quality' problem is in<br />
preparation and should appear as an AFHRL technical report within a<br />
year.<br />
The problem of comparing the quality of training methods in the<br />
military context deserves a thorough theoretical discussion of the<br />
impediments faced by researchers. It was thought expedient to publish<br />
some interesting data in this article with only a short discussion of<br />
problem traits to allow interested persons an earlier glimpse of the<br />
results.<br />
Previous research concerning this subject is limited. One exercise<br />
was the author's first simple comparison of mean Specialty Knowledge<br />
Test (SKT)5 scores, which essentially showed no differences between<br />
the performance of OJT and TTS graduates on the SKT for the Communications<br />
Center Operations specialty. This may have been indicative of future<br />
5 Dunham, 1972.<br />
70<br />
results but lacked depth in that it did not account for related variables<br />
that may have some concomitant effect on SKT scores, such as the<br />
pre-training abilities of the trainees.<br />
Another AFHRL study, The Road to Work: <strong>Technical</strong> School<br />
Training or Directed Duty Assignment?, by Mr. W. B. Lecznar,<br />
used several criteria which are thought to be related to performance,<br />
including: Job Difficulty Composite6, Number of Tasks Performed<br />
(range 1-372)7, Average Task Difficulty per Unit Time (range 1-7)7,<br />
and operationally prepared Airman Performance <strong>Report</strong> (range 1-9)<br />
overall evaluations. Several variables thought to cause variation in<br />
these criteria were used in addition to OJT/TTS as grouping in an<br />
analysis of covariance on " . . . a two treatment-one concomitant<br />
variable multiple regression model."8 Analyzing these results,<br />
Mr. Lecznar concluded that, "The inclination is to say that in these<br />
specialties formal resident technical training provides little or no<br />
advantage over on-the-job training."9<br />
Airman Proficiency Ratings (APR's) would appear to be a<br />
'natural' for measuring some aspects of performance, but they<br />
currently suffer from such a high degree of inflation that their<br />
lack of variation severely limits their potential for this purpose.<br />
The Navy has also done some interesting work in developing<br />
productivity indices.10 This concept may eventually be a fruitful<br />
avenue for future work but a few shortcomings must be overcome.<br />
One problem is that it will be difficult to account for changes in the<br />
quality of output or in the type of output. In addition, output level<br />
in many work centers in the Air Force is strongly related to the<br />
demand for the output, which could be a troublesome variable to<br />
account for in a productivity index.<br />
A logical attack for this problem would be to select the best<br />
measure of productivity, performance, or proficiency, a priori,<br />
and then accept the results obtained with observations on that<br />
measurement. State-of-the-art for this type of measurement is<br />
6 Meade & Christal, 1970.<br />
7 Lecznar, 1972, p. 2.<br />
8 Ibid., p. 3.<br />
9 Op. cit., p. 9.<br />
10 BUNAVPERS, 1969.<br />
71<br />
still primitive, so measurements used are those available, which<br />
are usually designed for another purpose. This is true of Specialty<br />
Knowledge Tests (SKT's).<br />
SKT's are used operationally by the Air Force as promotion<br />
criteria. They are specialty-specific paper and pencil tests updated<br />
frequently by Air Force supervisors with current experience in the<br />
specialty. An SKT score for an individual may be said to indicate<br />
that individual's potential for performance or productivity within<br />
the given specialty. SKT's are taken for promotion purposes, thus<br />
all with a score for a particular SKT had a common motive for<br />
excelling on the test.<br />
If either OJT or TTS were 'superior' to the other, one would<br />
expect this to be manifested in the graduates' performance. Studies<br />
discussed above found no evidence to support the hypothesis that<br />
OJT and TTS differ in the quality of their graduates. Because SKT's<br />
are specialty-specific, used by the Air Force for promotion, and are<br />
taken by both OJT and TTS graduates with common motives for<br />
excelling, SKT's may be useful for examining this hypothesis.<br />
Thus, SKT scores were used to compare the specialty knowledge<br />
of OJT and TTS graduates who received initial upgrade training in the<br />
same specialties for which OJT cost data has been or is being collected.<br />
The research hypothesis tested was that OJT and TTS graduates are<br />
of equal quality. Operational testing of this hypothesis was conducted by<br />
comparing the individual SKT scores of OJT graduates with those of<br />
TTS graduates, holding constant other factors thought to be related<br />
to SKT scores. The rationale for including or excluding 'related'<br />
variables will not be developed here. Analysis of covariance was<br />
used to test the operational hypothesis and also to test whether the<br />
'related' variables explained some of the variance of the SKT scores.<br />
Table 7 lists the variables included for each test specialty. Separate<br />
analyses were conducted for the specialties listed in Table 8.<br />
72<br />
Table 7<br />
Variables Used in the Analysis of Covariance<br />
<br />
Variables                  Description<br />
SKT Percent Right          Individual's first score for Test Specialty<br />
TESTIM                     Days between date of enlistment and date<br />
                           of test administration<br />
NEGRO                      Categorical: 1 if Negro, 0 if other<br />
FEMALE                     Categorical: 1 if Female, 0 if other<br />
ADMINISTRATIVE AI          Aptitude indices as computed from the<br />
GENERAL AI                 Airman Qualifying Exams taken prior<br />
MECHANICAL AI              to enlistment. Values are in raw<br />
                           score form.<br />
TEST REVISION NUMBER       SKT's are systematically updated. This<br />
                           is a set of categorical variables indicating<br />
                           which test form an individual<br />
                           received a score for.<br />
OJT                        Categorical variable indicating method of<br />
                           initial training. 1 if OJT, 0 if TTS<br />
OJT X ADMIN AI             Interaction between OJT variable and<br />
                           Admin AI score<br />
TTS X ADMIN AI             Interaction between TTS variable (1 if TTS,<br />
                           0 if other) and Admin AI score<br />
OJT X GEN AI               Interaction as above<br />
TTS X GEN AI               Interaction as above<br />
OJT X MECH AI              Interaction as above<br />
TTS X MECH AI              Interaction as above<br />
73<br />
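For illustration, the dummy and interaction variables of Table 7 can be coded into a regression design matrix roughly as follows. This is a sketch, not the study's actual computation; the function and field names are hypothetical.

```python
import numpy as np

def design_matrix(ai, ojt, negro, female):
    """Build Table 7 style regressors from raw fields: an aptitude-index
    raw score, a 0/1 OJT flag (0 = TTS), and 0/1 race and sex flags."""
    ai = np.asarray(ai, dtype=float)
    ojt = np.asarray(ojt, dtype=float)
    tts = 1.0 - ojt                      # TTS indicator (1 if TTS, 0 if other)
    return np.column_stack([
        np.ones(len(ai)),                # constant term
        ojt,                             # OJT dummy: 1 if OJT, 0 if TTS
        np.asarray(negro, dtype=float),  # categorical: 1 if Negro, 0 if other
        np.asarray(female, dtype=float), # categorical: 1 if Female, 0 if other
        ojt * ai,                        # OJT x AI interaction
        tts * ai,                        # TTS x AI interaction
    ])

# Three hypothetical records: two OJT graduates and one TTS graduate.
X = design_matrix(ai=[45, 60, 52], ojt=[1, 0, 1], negro=[0, 1, 0], female=[0, 0, 1])
print(X.shape)   # (3, 6)
```

Coding the interactions as OJT x AI and TTS x AI (rather than a single AI column) is what lets the slope on ability differ between the two training methods.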
Table 8<br />
Air Force Specialties Analyzed<br />
<br />
Air Force Specialty Code11   Description<br />
551X0                        Pavement Maintenance<br />
571X0                        Fire Protection<br />
622X0                        Cook<br />
631X0                        Fuels, petroleum<br />
647X0                        Materiel facilities<br />
702X0                        Administrative Specialist<br />
<br />
Table 9<br />
Summary of Hypotheses<br />
<br />
Null Hypothesis   Variables Tested<br />
1                 OJT x (appropriate) AI<br />
                  TTS x (appropriate) AI<br />
2                 OJT<br />
3                 OJT x (appropriate) AI<br />
                  TTS x (appropriate) AI<br />
                  OJT<br />
<br />
11 Also Test Specialty. The 'X' in each code is replaced by a 4 or 5<br />
depending on whether the SKT was for promotion to E4 or E5 in that<br />
specialty. Both the 4- and the 5-level SKT's were used for all<br />
specialties in the Table.<br />
74<br />
For each test specialty three regressions were computed in the<br />
following order and format:<br />
1. SKT = B1(OJT) + B2(NEGRO) + B3(FEMALE) + B4(OJT x AI) +<br />
B5(TTS x AI) + B6(TESTIM) + B7(TEST FORM)<br />
2. SKT = B1(OJT) + B2(NEGRO) + B3(FEMALE) + B6(TESTIM) +<br />
B7(TEST FORM) + B8(AI)<br />
3. SKT = B2(NEGRO) + B3(FEMALE) + B6(TESTIM) + B7(TEST FORM) +<br />
B8(AI)<br />
Three F tests were computed based on the increase in error sums of<br />
squares between, respectively, regressions 1 and 2, 2 and 3, and 1<br />
and 3.<br />
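The extra-sum-of-squares F ratios described above can be sketched as follows. This is an illustration on synthetic data with assumed effect sizes, not the study's AFHRL files, and the variable names are hypothetical; each reduced regression is nested in its full regression, so the F ratio compares the increase in error sum of squares against the full model's residual variance.

```python
import numpy as np

def extra_ss_f(y, X_full, X_reduced):
    """F ratio for the reduction in error sum of squares obtained by
    moving from the reduced regression to the full regression."""
    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)
    sse_f, sse_r = sse(X_full), sse(X_reduced)
    df_num = X_full.shape[1] - X_reduced.shape[1]   # parameters added
    df_den = len(y) - X_full.shape[1]               # residual df of full model
    return ((sse_r - sse_f) / df_num) / (sse_f / df_den), df_num, df_den

# Synthetic SKT-style data: an assumed OJT intercept shift, common AI slope.
rng = np.random.default_rng(0)
n = 500
ojt = rng.integers(0, 2, n).astype(float)   # 1 if OJT, 0 if TTS
ai = rng.normal(50.0, 10.0, n)              # aptitude index raw score
skt = 40.0 + 0.2 * ai + 3.0 * ojt + rng.normal(0.0, 5.0, n)

ones = np.ones(n)
reg3 = np.column_stack([ones, ai])                             # no OJT terms
reg2 = np.column_stack([ones, ai, ojt])                        # adds OJT dummy
reg1 = np.column_stack([ones, ojt, ojt * ai, (1 - ojt) * ai])  # interactions

F_slope, _, _ = extra_ss_f(skt, reg1, reg2)           # regressions 1 vs 2
F_intercept, df1, df2 = extra_ss_f(skt, reg2, reg3)   # regressions 2 vs 3
F_overall, q, _ = extra_ss_f(skt, reg1, reg3)         # regressions 1 vs 3
```

With data simulated to contain an intercept shift but no slope difference, the 'intercept' test yields a large F while the 'slope' test does not, mirroring the Fuels Specialist pattern described later in the paper.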
Null hypothesis number 1 concerned whether inclusion of the two<br />
interaction variables, 'OJT x AI' and 'TTS x AI', resulted in a significant<br />
reduction in the error sum of squares. These variables provide<br />
for the possibility that ability, as measured by an appropriate Aptitude<br />
Index, has a differential impact on SKT scores obtained by OJT and<br />
TTS graduates. This is also sometimes called a 'slope' test.12<br />
The second hypothesis concerned whether the OJT dummy<br />
variable significantly improved the error sum of squares. The regression<br />
did include an appropriate Aptitude Index score, but not in<br />
interaction form. This is the 'intercept' test. Rejection of the null<br />
hypothesis would support a contention that OJT and TTS graduates<br />
have different average SKT scores, holding other variables constant.<br />
The final null hypothesis provided an 'overall' test. It tested<br />
whether inclusion of both the interaction variables and the OJT variable<br />
resulted in a significant reduction in the error sum of squares.<br />
12 The technique used is computationally similar to that discussed in<br />
Lecznar, 1972. Wonnacott & Wonnacott, pp 77-79, and Johnston,<br />
pp 192-207, contain discussions of analysis of covariance using<br />
multiple linear regression as a computational approach.<br />
75<br />
Table 10<br />
F Ratios for Test Specialties for Which the Reduction<br />
in Error Sums of Squares Due to OJT/TTS Variables Were Not<br />
Statistically Significant13<br />
<br />
Test AFSC   H0   DFNUM   DFDEN   F<br />
55140       1    1       121     2.42<br />
            2    1       122     3.30<br />
            3    2       121     2.88<br />
55150       1    1       1494    0.07<br />
            2    1       1495    0.65<br />
            3    2       1494    0.38<br />
57140       1    1       2710    0.27<br />
            2    1       2711    0.89<br />
            3    2       2710    0.57<br />
57150       1    1       1914    0.97<br />
            2    1       1915    4.37<br />
            3    2       1914    2.22<br />
62240       1    1       621     2.46<br />
            2    1       622     1.32<br />
            3    2       621     1.89<br />
6225014     1    1       2982    4.19<br />
            2    1       2983    5.77<br />
            3    2       2982    4.98<br />
6315014     1    1       1428    1.39<br />
            2    1       1429    4.03<br />
            3    2       1428    2.67<br />
64750       1    1       6458    0.61<br />
            2    1       6459    0.28<br />
            3    2       6458    0.44<br />
70250       1    1       ?       5.55<br />
            2    1       ?       2.38<br />
            3    2       ?       3.97<br />
<br />
13 At the .01 level.<br />
14 TESTIM was included in these regressions. This variable was excluded<br />
from other F tests because it was not statistically significant at the .01<br />
level.<br />
76<br />
Table 11<br />
Test Specialties for which OJT/TTS Variables<br />
Proved to be Statistically Significant15<br />
<br />
63140   -1.2716   -2.29   4.40   -.20   -6.36          39.42   .179   1   2509    26.66<br />
64740   -2.29     -1.71   -.39   -4.66                 31.41   .246   1   2411    30.56<br />
70240   2.63      -1.27   -0.72  -5.54   .1916  .2416  28.47   .203   1   10716   14.80<br />
NOTE: Numbers in columns under variable names are regression coefficients.<br />
<br />
15 At the .01 level.<br />
16 Variables for which the F Ratio was computed. Data used to compute these regressions and<br />
test statistics were obtained from files maintained at Personnel Research Division, AFHRL.<br />
Records, or observations, used were for personnel who, up to the time they took an SKT,<br />
served only in the specialty for which that SKT was written. Final computations were<br />
completed under Project 6323, Task 0205, Study 4903.<br />
Table 9 summarizes these hypotheses. Table 10 lists the test<br />
specialties for which the F ratios would not allow rejection of the<br />
null hypotheses. For these test specialties no differences were found<br />
between the SKT scores of OJT graduates and those of TTS graduates.<br />
Table 11 displays regression and correlation statistics for the test<br />
specialties for which one or more of the null hypotheses was technically<br />
rejected. Figures 1, 2, and 3 show the data of Table 11<br />
graphically, holding constant the dummy variables NEGRO, FEMALE,<br />
and TEST FORM.<br />
Figure 1 illustrates the results for SKT test specialty 63140,<br />
Fuels Specialist. Rejection of the second null hypothesis means that<br />
there was a statistically significant difference in the performance<br />
of OJT and TTS graduates on the 63140 SKT. Ability as measured<br />
by the Mechanical AI apparently has the same relationship with SKT<br />
scores for both OJT and TTS graduates in this specialty.<br />
The same results obtain for test specialty 64740, Materiel<br />
Facilities Specialist, except the difference in mean scores as<br />
evidenced by the "OJT" coefficient is slightly larger. The display<br />
would be similar to Figure 1.<br />
Test specialty 70240, Administrative Specialist, is interesting<br />
in comparison because not only is the mean SKT score higher for<br />
"OJT" graduates, but also ability may have a differential impact on<br />
OJT and TTS graduates. Figure 2 illustrates these results.<br />
[Figure 1. Fuels Specialist: SKT score plotted against Mechanical AI. Parallel lines for TTS (level 39.42) and OJT (level 37.15).]<br />
[Figure 2. Administrative Specialist: SKT score plotted against General AI. The TTS line (slope = .24) and the OJT line differ in slope and level (difference in mean SKT score = 1.27).]<br />
[Figure 3. All Other Specialties Analyzed: mean SKT score plotted against AI. The OJT and TTS lines are coincident.]<br />
Figure 3 shows how the other test specialties would have appeared<br />
on the graphs - no difference in intercept or slope. The OJT and TTS<br />
lines are coincident.<br />
The differences in SKT scores found for OJT and TTS graduates<br />
are statistically significant, but small. Larger differences might<br />
have justified a strong statement concerning the quality of OJT relative<br />
to TTS. SKT scores at best measure only performance potential<br />
without considering other components of performance, so small differences<br />
in SKT scores by themselves have limited implications.<br />
These results provide no evidence to support a contention that OJT<br />
and TTS graduates differ in quality.<br />
V. CONCLUSIONS<br />
Work completed and underway at AFHRL is providing the Air Force<br />
with OJT cost estimates compatible for comparison with TTS cost estimates.<br />
This is a valuable first step but these cost data have limited use<br />
by themselves. They must be used in conjunction with measures of<br />
other criteria.<br />
Quality of training received is one of the criteria not incorporated<br />
into the cost estimates. Individual performances on Specialty Knowledge<br />
Tests were used in this paper to compare the quality of OJT and TTS<br />
graduates for a limited number of specialties. The differences in SKT<br />
scores found using analysis of covariance were too small to allow<br />
rejection of the research hypothesis that OJT and TTS graduates are<br />
of equal quality. There is clearly no alarming difference in quality<br />
as measured by SKT scores for the specialties analyzed. One may<br />
not generalize either the cost results or the SKT results to other<br />
specialties.<br />
Future work concerning OJT/TTS tradeoffs should include improved<br />
OJT cost estimation, repetitions of the SKT analyses of this paper for<br />
other specialties, development of a computerized algorithm for assigning<br />
non-prior service personnel to OJT and <strong>Technical</strong> Training Schools, and<br />
development of methodologies for determining the capacity of units to<br />
conduct OJT. This last item deserves special emphasis because it is<br />
the key to efficient utilization of Air Force training resources.<br />
Air Force operational units have manpower standards against<br />
which OJT trainees are drawn. Each additional trainee replaces a<br />
more qualified individual and requires the time of other personnel<br />
to conduct OJT. Assignment of an excessive number of trainees<br />
80<br />
would result in unacceptable degradation of unit productivity. Even<br />
if OJT were less "costly" and produced equally well "qualified"<br />
graduates, there is a limit beyond which either TTS would have to<br />
be used or else manning standards would have to be changed.<br />
Training costs form a large part of the DoD budget. Since the<br />
near future holds only budget cuts, one may anticipate increased<br />
interest in establishing procedures and data bases for selecting<br />
optimal OJT/TTS mixes. AFHRL research in this area provides a<br />
sound basis for continuing development of these methodologies.<br />
81<br />
REFERENCES<br />
Dunham, Alan D. - The Estimated Cost of On-the-Job Training to the<br />
3-Skill Level in the Communications Center Operations Specialty,<br />
AFHRL-TR-72-56, Lackland AFB, Texas: Personnel Research<br />
Division, Air Force Human Resources Laboratory, June 1972.<br />
Bureau of Naval Personnel - Manpower Allocation and Productivity<br />
Measurement Models, 1969.<br />
Johnston, J. - Econometric Methods, 2nd Edition, New York:<br />
McGraw-Hill, 1972.<br />
Lecznar, William B. - The Road to Work: <strong>Technical</strong> School Training<br />
or Directed Duty Assignment?, AFHRL-TR-72-29, Lackland AFB,<br />
Texas: Personnel Research Division, Air Force Human Resources<br />
Laboratory, April 1972.<br />
Meade, D. F., & Christal, R. E. - Development of a Constant Standard<br />
Weight Equation for Evaluating Job Difficulty, AFHRL-TR-70-44,<br />
Lackland AFB, Texas: Personnel Division, Air<br />
Force Human Resources Laboratory, November 1970.<br />
Wonnacott, R. J. & Wonnacott, T. H. - Econometrics, New York:<br />
John Wiley & Sons, Inc., 1970.<br />
82<br />
APPENDIX A<br />
OJT Cost Survey as sent to USAF<br />
supervisors with trainees upgrading to<br />
the 3 (semiskilled)-level in the first five<br />
specialties listed in Table 8. The surveys<br />
were administered during the third quarter<br />
of 1973.<br />
83<br />
FOR OFFICIAL USE ONLY<br />
OJT COST SURVEY<br />
AFPT 80-5X6X-109<br />
This survey is part of AF Contract #F41609-72-C-0048,<br />
for which Personnel Research Division of the AF Human<br />
Resources Lab is the contract monitor.<br />
AIR FORCE SYSTEMS COMMAND<br />
BROOKS AIR FORCE BASE, TEXAS<br />
FOR OFFICIAL USE ONLY<br />
DEPARTMENT OF THE AIR FORCE<br />
AFHRL PERSONNEL RESEARCH DIVISION (AFSC)<br />
LACKLAND AIR FORCE BASE, TEXAS 78236<br />
REPLY TO ATTN OF: PESE (Capt Dunham, 4106)<br />
SUBJECT: OJT Cost Survey<br />
TO: OJT Supervisors<br />
MAY 31 1973<br />
1. The purpose of the attached survey(s) is to collect data concerning<br />
On-the-Job Training to the 3-skill level. This survey data, along with<br />
information from other sources, will be used in decisions concerning OJT<br />
and <strong>Technical</strong> Training School.<br />
2. Answering the survey questions with some thought and effort will aid<br />
Air Force decision makers in the management of your AFSC.<br />
3. Permission to conduct this survey was granted by Hq USAF/DPXOS,<br />
reference Air Force Personnel Test (AFPT) Number 80-5X6X-109.<br />
FOR THE COMMANDER<br />
Chief, Personnel Research Division<br />
85<br />
INSTRUCTIONS TO OJT SUPERVISORS<br />
The accompanying survey is part of a research effort directed toward<br />
evaluating the costs and benefits of "On the Job Training." Your cooperation in<br />
completing the survey is requested. While it will probably take less than an hour<br />
of your time, the information you provide will be very valuable to the research and<br />
will help to improve Air Force policies concerning OJT and <strong>Technical</strong> Training School.<br />
If you do not quite understand a question, give the best answer you can and<br />
feel free to write in an explanatory comment next to the question or on the back of<br />
the form. If you are completely uncertain about what a question means, enter a "?".<br />
If a question, for some reason, does not apply to your unit, enter "N.A."<br />
The survey is divided into two parts: A and B. Part A asks you to try to<br />
make the best estimates you can about your average experience.<br />
Part B asks you to keep a record of activities, each day for a week. It is<br />
important that you do this daily, so that what was actually done is fresh in everyone's<br />
mind. If you also feel that the week you reported on is not representative of your<br />
normal operations, so indicate by writing in an appropriate comment; and if you can,<br />
indicate what the average value ought to be in your judgment.<br />
If you have any questions, contact Capt Dunham, Autovon 473-4106.<br />
86<br />
SPECIAL INSTRUCTIONS<br />
Q Number<br />
Upgrade Training in AFSC<br />
1. The trainee's supervisor should complete this survey. Approximately one half (1/2)<br />
hour will be required to complete Part A, and five minutes per day for a week will be<br />
necessary for Part B.<br />
2. When answering the questions, be sure to have a Job Proficiency Guide (STS), and<br />
the Consolidated Training Record AF-623 for each person undergoing training, handy to<br />
refer to.<br />
3. The person who fills out this survey is encouraged to ask for the help of others, such<br />
as the OJT Monitor or an instructor, when uncertain about the answer to a question.<br />
4. Part A, which should be completed immediately, is to be returned together with Part<br />
B within 2 days. Do not start Part B before completing Part A.<br />
5. If there is difficulty in deciding what information is being asked for in any question,<br />
contact Capt Dunham, Autovon 473-4106.<br />
BACKGROUND INFORMATION<br />
NAME:  Last    First    Middle Initial<br />
GRADE: If Air Force NCO enter "4" for E4, "5" for E5, etc.<br />
If Air Force Officer enter "0."<br />
If Civilian enter last digit of GS GRADE, e.g., "1" for GS-11.<br />
SOCIAL SECURITY NUMBER<br />
PAS CODE<br />
87<br />
PART A<br />
1. How many trainees do you have upgrading to the 3 and 5 level in your<br />
section?<br />
2. When a man (or woman) first reports directly from Basic <strong>Military</strong><br />
Training, it may take some time before he actually begins training and<br />
work, even though his "date of entry" to training may be the same as his<br />
reporting date. This delay may be due to personnel processing, the need to<br />
wait for security clearance, or some other cause. Approximately how many<br />
days does it take before the newly arrived "helper" actually begins OJT?<br />
3. There is also delay in entering training associated with the arrival of<br />
a 3 level from <strong>Technical</strong> School. In addition to personnel processing,<br />
familiarization with procedures specific to your situation may be necessary<br />
before he/she actually begins 5 level training. On the average, this delay<br />
is:<br />
4. On the average, how many weeks elapse between achievement of 3<br />
level proficiency and actual award of the 3 skill level AFSC?<br />
5. What week of training is your most average 3 level (helper) in?<br />
6. What % of the 3 level proficiency training do you estimate he has<br />
completed?<br />
7. When he arrived what % of the duties of a 3 level could he complete?<br />
8. If you stopped doing OJT training would you be able to reduce the<br />
number of NCO's in your work area without significantly reducing<br />
effectiveness? (Insert a "1" for Yes, or a "0" for No.)<br />
9. During the training period for 3 level OJT, the instructor (trainer)<br />
must spend some time keeping training records up to date. On the<br />
average over the whole training period, how many hours (or fractions of<br />
hours) does the instructor (trainer) spend in record keeping for<br />
one trainee?<br />
GO ON TO NEXT PAGE<br />
88<br />
Port A (co&d)<br />
I -. .<br />
10. The newly arrived Tech School-trained 3 level is not as productive<br />
at first as the OJT-trained 3 level is, although he may soon close the gap.<br />
a. In your estimate, what percentage of the worklocci of an OJTtrained<br />
3 level can the Tech School graduate handle immediately<br />
after his arrival?<br />
b. How many weeks does it take bcforc the Tech School-trained<br />
3 level works with as little supervision as an OJT-trained 3 level?<br />
c. After both types of 3 levels are awarded their 5 level, on the<br />
overage do you consider either to have superior performance?<br />
(Inset-t a “1” for Yes, or a “0” for No.)<br />
d. If your answer was “yes, ” which type of 3 level do you<br />
consider to have better performance? (lnscrt a “1” for OJT, or a<br />
“0” for Tech School .)<br />
11. If extra (remedial) training is conducted in your office for trainees<br />
who fail the End of Course Exam (Apprentice Knowledge Test), answer<br />
the following questions:<br />
a. In your experience, what percent of the 3 level trainees fail<br />
the End of Course Test the first time they take it?<br />
b. On the average, how many weeks of additional training are<br />
given to airmen who fail the End of Course Exam before they take<br />
the test again?<br />
c. How many hours per week, during the normal work week, does<br />
the trainee spend engaged in this remedial training?<br />
d. How many hours per week, during the normal work week, does<br />
the instructor spend engaged in this remedial training?<br />
12. If you stopped doing OJT training and had no replacements for the<br />
trainees could your section continue to perform its mission without<br />
significantly reducing effectiveness? (Insert a "1" for Yes, or a "0" for<br />
No.)<br />
GO ON TO NEXT PAGE<br />
[Answer boxes: Percent; Weeks; Percent; Weeks; Hrs.; Hrs.]<br />
Part A (cont’d)<br />
13. Based on your past experience, and, if you feel you need help, the experience of<br />
other qualified personnel in your section, list the average number of productive and<br />
non-productive hours of work for the trainee upgrading to the 3 level for each week<br />
between start of training and award of skill level. For instance, in the fourth week of<br />
training your trainee spent approximately 30 hours receiving instruction and reading and<br />
10 hours doing productive work. Your second entry would look like this: week 4, 10<br />
productive hours, 30 instruction and reading hours.<br />
Note that the hours for each week must sum to 40, and you must have an entry in every<br />
week. If, on the average, trainees complete training between the 12th and 16th weeks,<br />
then the entry for the sixteenth week should show a "40" under productive and a "0"<br />
under instruction.<br />
Weeks of Training (to the 3 level): 1, 4, 8, 12, 16, 20<br />
[Blank entry boxes for each week: Trainee Productive Hrs Per Week | Instruction & Reading Hrs Per Week]<br />
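The entry rule in question 13 (every week present, and productive plus instruction hours totaling 40 per week) can be checked mechanically. The sketch below is illustrative only and not part of the original survey instrument; the function name and tuple layout are assumptions for the example:

```python
# Validate weekly OJT hour entries per question 13: each week's
# productive hours plus instruction-and-reading hours must total 40,
# and there must be an entry for every week from week 1 onward.
# The (week, productive, instruction) tuple layout is hypothetical.

def validate_entries(entries):
    """entries: list of (week, productive_hrs, instruction_hrs) tuples."""
    problems = []
    weeks = [w for w, _, _ in entries]
    # Weeks must run 1..N exactly once each, with no gaps or duplicates.
    if sorted(weeks) != list(range(1, len(weeks) + 1)):
        problems.append("missing or duplicate week entries")
    for week, productive, instruction in entries:
        if productive + instruction != 40:
            problems.append(
                f"week {week}: hours sum to {productive + instruction}, not 40"
            )
    return problems

# Mirrors the survey's worked example: in week 4 the trainee logged
# 10 productive hours and 30 hours of instruction and reading.
sample = [(1, 0, 40), (2, 5, 35), (3, 8, 32), (4, 10, 30)]
print(validate_entries(sample))  # → []
```

An empty list means the form passes both checks; any violation is reported per week so the respondent can correct that row.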
14. What is the total number of personnel in your section (officer, enlisted,<br />
and civilian)?<br />
15. In addition to the trainees you now have responsibility for, how many<br />
more 1 level trainees could your section train right now without<br />
significantly reducing the effectiveness of section operations?<br />
(ignoring the limit on authorized number of personnel)<br />
16. If you had to lose a qualified 5 level for each new 1 level trainee<br />
(helper), how many 1 level trainees could your unit train right now<br />
without significantly reducing the effectiveness of section operations?<br />
GO ON TO NEXT PAGE<br />
[Answer boxes: no. of pers.; 1 level trainees; 1 level trainees]<br />
PART B<br />
FOR OFFICIAL USE ONLY<br />
FOR ONE WEEK PLEASE KEEP A RECORD AT THE END OF EACH DAY OF THE<br />
AMOUNT OF TIME SPENT IN EACH CATEGORY.<br />
17. How many DDA airmen do you currently have enrolled in<br />
OJT to the 3 level?<br />
18. Record daily the total hours your 1 level<br />
trainees spend on reading and receiving<br />
instruction each day. (Be sure and ask your<br />
trainees for their assistance in completing<br />
this question.)<br />
[Daily entry boxes: Mon, Tues, Wed, Thurs, Fri]<br />
19. Record daily the total hours your 1 level<br />
trainees spend in activities contributing to<br />
office productivity.<br />
20. Record daily the total hours of<br />
instruction provided by each grade of<br />
instructor.<br />
[Daily entry boxes by instructor grade: E4 to E9, GS-5, GS-6]<br />
AFTER YOU HAVE COMPLETED THE<br />
ENTRIES FOR FIVE DAYS RETURN<br />
THE SURVEY TO YOUR BASE<br />
CBPO<br />
FOR OFFICIAL USE ONLY<br />
SYMPOSIUM<br />
TRANSLATION OF TRAINING RESEARCH INTO TRAINING ACTION - A MISSING LINK<br />
Introductory Remarks<br />
C. Douglas Mayo<br />
Naval Technical Training Command<br />
I accepted the chairman's job in this symposium with the understanding<br />
that I would not have to be the kind of chairman who is neutral and impartial,<br />
but instead that I could expose my biases on the Translation<br />
of Training Research into Training Action, the same as anyone else. So,<br />
in introducing the topic it should not be surprising if I express some<br />
of the thoughts that I may have been repressing during the two decades<br />
that I have been involved in training research and in translating it<br />
into training action.<br />
During this period, we have won a few. We are all familiar with instances<br />
in which the results of training research and development have<br />
been implemented into the on-going training operation and have contributed<br />
materially to it. Two examples that readily come to mind in Naval<br />
Technical Training (the area of operation with which I am most familiar)<br />
are programmed instruction during the decade of the 60's and the implementation<br />
of computer based instruction which is underway on a substantial<br />
scale at the present time.<br />
But for every R&D project that has had an impact upon training there are<br />
numerous ones that have not. Now it can be argued that this is the nature<br />
of the beast, that risk taking is an inherent and necessary part of the<br />
R&D process. No doubt this is true, but I submit that the normal risks<br />
associated with R&D do not even begin to account for the unused, and in<br />
some instances unusable, volume of training research.<br />
Much of the systematic civilian work in the area of translating educational<br />
research into educational change has been accomplished under the<br />
rubric of "linear change models in education." Most of these models<br />
describe a linear sequence of functions that include: research (the function<br />
in which new knowledge is produced), development (in which a product<br />
or procedure based on the new knowledge is engineered and evaluated), diffusion<br />
(in which the generality and extent of applicability of the product<br />
or procedure is explored), and adoption (the function of implementing an<br />
appropriate form of the product or procedure in an on-going educational<br />
situation). It is generally conceded that in order for such a model to<br />
work, at least one of two conditions must exist; either the people involved<br />
in the linear sequence of functions (i.e. research, development,<br />
diffusion, and adoption) must be intrinsically motivated toward the<br />
common goal of improving education by means of research, or they must<br />
be extrinsically motivated by accountability with respect to accomplishing<br />
the common goal. More often than not, a combination of both<br />
intrinsic and extrinsic factors is at work in varying degrees in the<br />
instances in which the linear change model functions properly.<br />
Moving to a more specific level, and with special emphasis upon the<br />
military situation, there are a number of conditions that contribute<br />
to the "missing link" in translating training research into training<br />
action. For convenience they may be divided into three groups, namely,<br />
those associated with research personnel, those associated with training<br />
personnel, and those associated with the interface between research<br />
personnel and training personnel.<br />
First, research personnel. Not infrequently research personnel conceive<br />
of a "good" study and then look for a training situation in which to do<br />
it, without seriously considering whether the findings of the study, if<br />
successful, could be implemented in that or any other school. In my experience,<br />
it is not as unusual as one would wish for a researcher to<br />
complete a training research project in which his hypotheses are sustained,<br />
and yet be unable to state in what way his findings can be used to improve<br />
training or training efficiency.<br />
A simple solution to the problem would seem to be to state in advance for<br />
each applied research project what action should be taken in the event<br />
that any one of several possible outcomes of the study should result.<br />
Carried to its logical conclusion this should prevent the researcher's<br />
wondering what to do with his results upon completion of his research.<br />
Sometimes the researcher does not wonder what to do with his results<br />
because he does not feel that it is his responsibility to point out<br />
practical implications of his research, that his responsibility is limited<br />
to properly conceiving, designing, conducting, and reporting the<br />
study.<br />
What about training personnel then? Are they not eagerly awaiting any<br />
research results that will improve their training operation? Well, yes<br />
and no, but mostly no. They are busy with the day to day pressures of<br />
conducting training. Both having the research conducted in their school<br />
and implementing the results tend to disrupt the training operation and<br />
pile additional work on a staff that already feels that it is overloaded.<br />
Often training personnel feel that the R&D was not their project, that<br />
any credit that might accrue from it is likely to go to the researchers<br />
or to a higher echelon in the training organization, and it is just hard<br />
for them to get very enthusiastic about it. Besides, several months often<br />
elapse between the time the data were collected and the time the more<br />
or less unintelligible report arrives.<br />
This leads us to our third set of conditions that tends to affect the<br />
production of usable research and the proper use of it, the interface<br />
between research personnel and training personnel. This may well be<br />
the area in which the greatest source of our trouble lies. Too often<br />
research personnel do not accompany the research report when it is<br />
submitted to training people, do not make a presentation to training<br />
people concerning it, and do not assist in its implementation.<br />
Clearly, it takes much more initiative on the part of the training<br />
people to decide to make changes in their training operation on the<br />
basis of a report than it does to reach a decision to take action<br />
when the author of the report describes the study, clarifies points<br />
that are not understood, and makes logical recommendations in a face<br />
to face situation to the people who are responsible for efficient<br />
training in the course concerned. The probability of successful<br />
implementation is further enhanced if the researcher continues to<br />
work with cognizant training personnel throughout the implementation<br />
of the research results. Normally this does not require a great deal<br />
of time and in my view, at least, is time better spent than in utilizing<br />
this same time on another research study, with the probability<br />
that the results of neither of the two studies will be implemented.<br />
We have long known that in implementing any concept, responsibility<br />
for its occurring must be fixed. I am therefore suggesting that each<br />
organization engaged in applied R&D should have a small group, reporting<br />
directly to the Technical Director or to the Commanding Officer,<br />
whose primary responsibility is two-fold, first, to look externally<br />
to implementation of R&D products and, second, to look internally to<br />
the design and production of products that are capable of being implemented.<br />
Thus, on the one hand this group would ensure that R&D<br />
products were properly presented to appropriate training personnel,<br />
normally by the research personnel who did the work, and that assistance<br />
in implementing the products was continued as long as appropriate.<br />
On the other hand, prior to approval of an applied research project the<br />
group would assess the probable saleability of the product that the R&D<br />
project was likely to produce and so advise the Technical Director or<br />
Commanding Officer. If the purpose of the project was to produce a<br />
usable product and the proposed project was judged not to be capable<br />
of producing such a product, it would be referred back to the originator<br />
for revision.<br />
At this point I will give the other members of the panel a chance to<br />
express their thoughts, repressed or otherwise. I hope that they are<br />
going to tell us that I am the only one who has observed the problems<br />
I have related, that everything runs smoothly in the training world<br />
with which they are familiar. But we will have to wait and see if<br />
that is in fact the case. The plan for the symposium is to have two<br />
prepared papers, one by a trainer (Mr. Walter McDowell) and one by a<br />
researcher (Dr. Norman Kerr). The papers will then be discussed by<br />
Lieutenant Colonel Donald Mead and Dr. William Moonan.<br />
In addition, if I have said anything in my introductory remarks that<br />
is worth disagreeing with or otherwise commenting on, I hope that<br />
Colonel Mead and Dr. Moonan will include this in their discussion also.<br />
Following the papers by the two discussants, we will afford an opportunity<br />
to the four members of our Reaction Panel to make a few impromptu<br />
remarks. Upon completion of this structured portion, we will be<br />
pleased to open up the symposium for comments or questions from the<br />
floor. Our first speaker is Mr. Walter E. McDowell from the Army Training<br />
and Doctrine Command at Fort Monroe. He will be speaking on the<br />
basic topic of Translation of Training Research into Training Action,<br />
from the Viewpoint of a Training Manager.<br />
SYMPOSIUM<br />
"TRANSLATION OF TRAINING RESEARCH INTO TRAINING ACTION - A MISSING LINK?"<br />
From the Viewpoint of a Training Manager<br />
Mr. Walter E. McDowell<br />
Supervisory Educational Specialist<br />
Headquarters US Army Training and Doctrine Command<br />
Fort Monroe, Virginia<br />
When given the requirement to propose Human Factors-Behavioral Science<br />
research for the coming year, we are faced with a soul-searching situation<br />
in which we must ask ourselves two questions: one, are we really getting<br />
the taxpayers' dollar benefit from the work that has transpired in the past;<br />
and, two, will the research we are asking for be likely to yield productive<br />
results in the future? Another question which also must be scrutinized when<br />
it surfaces is, if we aren't getting a maximum return on our investment in<br />
research, how can we, as trainers and managers, translate the results of<br />
future research into better returns on our investment dollars? First of all,<br />
it is not purely a translation problem. The process of bridging the gap<br />
between the conduct of research and the implementation of research, by the<br />
Army trainer, is a continuing one. It begins while the problem is being<br />
defined to both the researchers and the trainers; it continues during the<br />
conduct of the research; it continues while the final report is being<br />
drafted, and it still continues long after the final report has been published.<br />
Let's begin at the beginning. In a very real sense, utilization begins at<br />
the time the problem to be researched is presented. The training problem must<br />
be carefully defined and documented to insure that both the researcher and the<br />
trainer are speaking the same language and that the researcher fully understands<br />
the problem that the trainer believes he has. After the trainer submits his<br />
request for research, the researcher must then submit a statement to the<br />
trainer on what he views the problem to be and how he intends to seek out the<br />
solution. A common understanding and agreement of the problem to be researched<br />
is essential if a usable end-product is to be the result. The research approach<br />
may have to be modified in the light of continuing support requirements. Sometimes<br />
it may have to be recast if the research proposed is not understood by<br />
the trainer to be appropriately responsive to the problem as he perceives it.<br />
Constant interaction between researcher and trainer takes place during the<br />
conduct of the work. Communication is achieved informally on a continuing<br />
basis through the use of interim progress reports, periodic reviews, briefings,<br />
and discussions. The trainer requesting the research must be kept constantly<br />
aware of the work taking place, and the researcher must be kept abreast of<br />
significant military changes within the parameters of training problems. Hopefully,<br />
the interaction and open communication will keep the evolving solution<br />
directly aligned with the trainer's problem.<br />
As relevant data becomes available, that is, data which bears upon the<br />
problem, it is provided to the trainer. Such communication involves both<br />
those management functions which can implement change and those responsible<br />
for actually carrying out the decision to make the change. Management, simply<br />
decreeing that change be made, will not, alone, get the job done; the "not-invented-here"<br />
syndrome must be overcome. The "doer"--the guy at the working<br />
level--must be convinced that the change is good and that it will be of benefit,<br />
not only to himself, but to the prospective students and to the system, otherwise<br />
it will be doomed to failure.<br />
When the report is submitted, it is reviewed and staffed with appropriate . . .<br />
Scientific and Technical Information (CFSTI). All important are the informal<br />
and continuing contacts between researchers and potential users.<br />
A report by Dr. William A. McClelland, at the Conference on Social<br />
Research and Military Employment at the University of Chicago, June 1967,<br />
gives an insight into possible characteristics of unsuccessful and successful<br />
research implementation: UNSUCCESSFUL RESEARCH EFFORTS:<br />
(1) Poor communication. Neither briefings nor reports effectively<br />
communicated the validity and operational value of the research.<br />
(2) Lack of timeliness. The product of the research effort did not meet<br />
a valid, contemporary requirement. It was available too late or too early,<br />
or it was too tangential in nature.<br />
(3) Degree of change. Too many changes in operating procedures were<br />
required. For example, training was shortened (or lengthened) too much, or<br />
the existing Army structure was incompatible with the indicated change. Existing<br />
or traditional practice may have been too strongly threatened.<br />
(4) Lack of strong command support. Not enough people at high enough<br />
echelons wanted to change.<br />
(5) Costs. Funds and personnel required had not been programmed and could<br />
not be obtained.<br />
(6) Lack of engineering capability. The training experts required to<br />
translate the research findings into more operationally usable form did not<br />
exist or were not available.<br />
(7) Policy problem. There was a lack of doctrine under which to fit a<br />
new or improved training or operational capability.<br />
(8) Insufficient "salesmanship". Project people did not devote enough<br />
effort to "selling" the product. At one time, for example, we believed this<br />
was not the job of the research agency.<br />
Possible reasons for successful implementation are largely the obverse of<br />
this list. SUCCESSFUL RESEARCH EFFORTS:<br />
(1) Timeliness. A recognized instructional gap was filled. The work was<br />
obviously relevant to a planned or on-going revision in Army practice.<br />
(2) Command interest. There was a strong operational command interest,<br />
including that of a subordinate command. Interest existed at both management<br />
and working levels.<br />
(3) Engineered product. The end-product was concrete. It was a material,<br />
plug-in item, specifically engineered for a given situation, requiring little<br />
additional Army effort to adapt it to the operational setting and requiring<br />
no doctrinal changes.<br />
(4) Earlier acceptance by others. Some other service or civilian<br />
institution had accepted and successfully used the product or a very similar<br />
one.<br />
(5) Personal interest. An individual Army officer or group of officers<br />
or key civilians associated with the work were convinced of its worth and were<br />
willing to serve as forceful proponents.<br />
SYMPOSIUM<br />
"TRANSLATION OF TRAINING RESEARCH INTO TRAINING ACTION - A MISSING LINK?"<br />
From the Viewpoint of a Training Researcher<br />
Dr. Norman J. Kerr<br />
Director of Research<br />
Naval Technical Training Command<br />
Naval Air Station Memphis<br />
Millington, Tennessee<br />
Perhaps those of you in training management think I do you some injustice.<br />
I cannot agree. The training manager, in my perception of him, always seeks<br />
answers yesterday to tomorrow's problems which he is causing today. He traditionally<br />
accepts a truism such as "individualization is 'in'" or "training<br />
time must be reduced by 10%" and, with unquestioning blindness, blunders<br />
forward with compliance actions. Unquestionably, he must instinctively<br />
know that any adjustments to the system will result in some varying<br />
degree of turmoil. But it must be that he hopes that the turmoil will be<br />
temporary rather than permanent--and he does not willingly accept the fact<br />
that such actions may be . . . the training manager is also<br />
known to be capable of putting together a curriculum on a complex<br />
weapons system (for example) overnight.<br />
The typical training manager will fight the instructor billet battle<br />
seemingly to the wire by a continual wailing over inadequate numbers of<br />
instructors; yet he is very willing to accept new curriculum administration<br />
responsibilities for his overworked school staffs. Those who question this<br />
should ask themselves how many or in what proportions were new instructor<br />
billets provided when additional curriculum burdens such as human relations,<br />
defensive driving, and remedial reading programs were directed for inclusion<br />
into existing training programs. Yet, these good men seem obsessed with<br />
management and its precise controls, and often decline well-intentioned<br />
advice and assistance. That is putting it mildly, for it is only with reluctance<br />
and suspicion that many training managers will accept a rationale<br />
or the experience of others who are not only directly in but senior within<br />
his line organizational structure.<br />
It might be that the training manager has the utmost confidence in himself--<br />
not necessarily in his abilities to improve the training, but most<br />
assuredly in his moxie to be able to handle the many and varied responses<br />
he will derive from both judgmentally sound and highly questionable decisions<br />
which he has made in the name of training management. After all, is this not<br />
what training management is all about?<br />
,<br />
Have I left out planning--or the lack thereof--by the training manager?<br />
I have not meant to. As any manager knows, planning is an essential part of<br />
his daily routine. My Lord, does he plan! He has training plans, personnel<br />
pipeline plans, SER plans, CIKX?3 plans. . . Heaven knows how many other<br />
schemata he considers in the laying out of training requirements and in the<br />
execution of training activities. But ask the training management community<br />
for research plans and stand by for a high fog count in the responses. Better<br />
yet, ask for definitions of training research and you're likely to get responses<br />
in the order of "something not needed", "something received after the fact", and<br />
"something that is needed to verify my actions." The training manager, often<br />
willing to spend hours grumbling, perhaps, but unquestionably counting the number<br />
of sessions in a score of curricula to develop data to satisfy an obsolete<br />
report on a hands-on vs theory ratio, is generally not prone to review the<br />
research conducted on highly pertinent areas of his responsibility. Yet that<br />
research which might well enable him to better conduct the business of training<br />
may be in evidence all about him, gathering the patina of GSA dust.<br />
Early in preparation for this symposium I decided that my assigned topic<br />
"From the Viewpoint of a Training Researcher" gave me wide latitude in pinpointing<br />
--at least to the satisfaction of my own perceptions--where the problem<br />
exists in the translation of training research into training actions. I think<br />
that I have now made that point several times over and I suppose that, since I<br />
probably have a goodly percentage of my listeners in a state of spirited agitation,<br />
I should smile broadly and back slowly out of the nearest exit. I promise that<br />
I will do just that very soon, but first let me say that I tried to locate strong<br />
support for my contention from the literature.<br />
To my dismay I found that many of the articles on the subject in research<br />
and development professional journals are really self-deprecating. We in<br />
research are really rather prone to take on the problems of the world; to<br />
blame ourselves for another's inability or unwillingness to comprehend. The<br />
chest beating is often, in reality, a hushed humble breast-thumping which marks<br />
the contrite, and there are many in research who continually try to find the<br />
missing link of translating training research into training action alone. It<br />
was amazing how sparse was the material in support of the rather parochial<br />
position I have presented. As a matter of fact, I found the strongest proponent<br />
of the view that management refuses to take seriously well done research in<br />
the management field. I . . . of criticism. Thus, if researchers want to<br />
feel badly about their role in the missing link bit, they merely have to turn to<br />
sources such as the American Education . . . of the problem faced and the solutions required.<br />
Let me quote Beman again:<br />
"If we don't do this (providing the careful definition and<br />
the solutions required), if we continue to allow a situation<br />
to exist where we are constantly wondering how much support<br />
we are giving R&D, then we are openly admitting that we as<br />
managers ha . . .<br />
So, you see, the solution to the problem is clearly in the hands of<br />
management.<br />
Now, being in research I cannot help but come forward with not one but<br />
several approaches for solution.<br />
The problem is not easy to solve: there are many levels of competencies<br />
as well as many specialties in the research function just as, in a production<br />
facility, one will find a variety of competencies and specialties within the<br />
work force. In the research function, for example, there are personnel who<br />
work well in fundamental research problems but who, perhaps, are impractical<br />
in the broad field of applied research. The converse is axiomatic. The<br />
Department of Defense RDT&E structure speaks to such specialties in the functional<br />
terms of:<br />
6.1 Fundamental research<br />
6.2 Exploratory development<br />
6.3 Advanced development<br />
6.4 Engineering development<br />
6.5 Management and support, and<br />
6.6 Operational Systems development,<br />
recognizing the unique facets of research and development. It seems to me<br />
that management should recognize, if not the categories, then certainly the<br />
functions of the several types of research programs.<br />
Glenn Bryan, in a published article entitled "The Role of Basic Research<br />
in the Total R&D Process", in the January 1973 issue of Research Management,<br />
conceptualizes a series of models which address the enigma of placement of the<br />
various types of research into an organization.<br />
Among the models are several organizational ones with which we are<br />
familiar. First, he depicts a "Linear Production Model" which, for my<br />
purposes today, adapts the DOD research functions into a production scheme.<br />
[Figure: Linear Production Model]<br />
You can see, the block to the right draws upon and feeds responses to the<br />
several research functions to the left. This situation is somewhat idealized.<br />
It appears in the straightforward approach that flow to and from the blocks<br />
is continuous, unimpeded, and simple. It would be nice were that so.<br />
Another model of Bryan's is the "Departmental Model."<br />
[Figure: Departmental Model]<br />
This model has decentralized the research function, and superior authority,<br />
which Bryan calls "Management" in the model, takes total responsibility for<br />
coordinating the several research functions and moderating communication among<br />
them. Bryan states that such a system as depicted in the model requires<br />
superior technical competence at the management level to function effectively.<br />
I might also add that it will take superior managerial competence to keep such<br />
a competitive system functioning toward the overall goals of management.<br />
Another model is the "Project Model" which has had much usage throughout<br />
the Defense establishment.<br />
The project manager is supreme in this model. He buys what he needs,<br />
and what he needs is largely a function of his perception in achieving project<br />
goals. Bryan points out that such a structure has both good and bad<br />
merits. Inasmuch as the highly focused research efforts that are in line<br />
with project goals are often well supported, since they are usually an adjunct<br />
to a high-visibility, large project, that is good. The drawback, of course,<br />
is that there is little general support for research not directly in evidence<br />
of project goals. Research runs the constant risk of being a necessary evil<br />
in this model.<br />
I won't show you Bryan's "Organic Model." There is little need to, for<br />
consider a tree, if you will, laden with lush fruit. Then consider its root<br />
structure, seemingly disorganized, ugly, and dirty. Then consider the roots<br />
as what they are, the source of the healthy fruit: the roots that are often<br />
misunderstood as to their function and their problems. That the roots can<br />
isolate the proper nutrients that are necessary for effective fruit production<br />
is without question. Also without question is the fact that indiscriminate<br />
root pruning will almost certainly have an adverse effect upon the fruit.<br />
Bryan's tree is more an analogy than a model. It is too simple to portray<br />
in any real sense of the world what the interaction between management and<br />
research is; it merely portrays what it should be.<br />
And this, perhaps, is the reason that we live in a world of linear,<br />
departmental, and project models--variations, perhaps, of those described.<br />
A reasoned approach by management is necessary to bring about precisely that<br />
fine-grained relationship of mutual respect within organizations among their<br />
major functionaries. The cartoon showing the manager in his plush office<br />
speaking to a person in a laboratory coat, and bearing the caption "You<br />
researchers--make some kind of a breakthrough!" has got to cease being funny.<br />
It would seem to me that the understanding of the R&D function, the<br />
placement of that function within the dynamics of the organization, and the<br />
understanding by management that reasoned direction is required will bring<br />
the closure we all seek and eliminate the need to worry about the translation<br />
problems between training research and training actions.<br />
Dr. Kerr suggests what I perceive to be the real key to<br />
moving research products from the laboratory to the<br />
classroom, and that is comprehensive training research<br />
plans and requirements. I believe the managers' suspicion,<br />
reluctance, and resistance may frequently be<br />
traced to research conceived in the isolation of a<br />
laboratory and subsequently offered to solve a training<br />
problem. Typically, this requires the original research<br />
product to be modified, adjusted, and expanded, with<br />
predictable results. It is not surprising that bookshelves<br />
are loaded with research reports never applied.<br />
In many instances, I suspect it is because they were not<br />
designed in response to stated operational problems.<br />
Along the same lines, but somewhat deeper, I feel that<br />
major modifications must be made to gain coordinated<br />
management of the multiple DOD research agencies.<br />
Currently 6.1, 6.2, and 6.3 efforts are often initiated<br />
without cross-validation or coordination with training<br />
managers. It is not unheard of to have a contractor,<br />
whose unsolicited proposal is funded with 6.1 monies,<br />
arrive at a military installation without announcement<br />
or coordination. Similarly, there are examples of<br />
duplicated efforts while topics needing exploratory<br />
research remain untouched. At this time the military<br />
services are open to criticism in this area. Personally,<br />
I believe that no research, regardless of functional<br />
category 6.1, 6.2, or 6.3, should be undertaken unless<br />
it supports stated user requirements. Recently an<br />
Interservice Training Review Board has been established<br />
which could make significant contributions in this area.<br />
While I agree with Dr. Kerr that a training research<br />
requirement document is an essential item, I don't believe<br />
that it is solely a management responsibility. Ideally<br />
this document would be developed in coordination with<br />
training researchers. Carried a step further, it could<br />
be a matrix consisting of a taxonomy of significant<br />
training research areas on one axis and research agencies<br />
on the other. When ongoing research efforts are plotted,<br />
those areas not being addressed become readily apparent.<br />
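The matrix idea above can be sketched in a few lines; the research areas, agencies, and plotted efforts below are invented placeholders for illustration, not actual taxonomy entries or DOD organizations.<br />

```python
# Hypothetical sketch of the proposed requirements matrix: a taxonomy of
# training research areas on one axis, research agencies on the other.
areas = ["simulator fidelity", "feedback methods", "task analysis"]
agencies = ["Agency A", "Agency B"]

# Ongoing efforts plotted as (area, agency) pairs -- illustrative only.
efforts = {("simulator fidelity", "Agency A"), ("feedback methods", "Agency B")}

for area in areas:
    row = ["X" if (area, agency) in efforts else "-" for agency in agencies]
    flag = "" if "X" in row else "  <- not being addressed"
    print(f"{area:20s} {' '.join(row)}{flag}")
```

Marking each (area, agency) pair where work is under way leaves the empty rows, the unaddressed areas, immediately visible.<br />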
At this point, I fully endorse the close coordination<br />
between researcher, manager, and trainer presented by<br />
Mr. McDowell. Several points of his presentation warrant<br />
reemphasis: a meeting between researcher and training<br />
manager should be held prior to initiation of the research;<br />
the statement of work should be coordinated and approved;<br />
proposed research products should be reviewed and approved<br />
as developed; and close interaction should be maintained<br />
as the project progresses. This coordinated spearhead<br />
will open a pathway through management's anti-research<br />
field of sharpened fence posts described by Dr. Kerr.<br />
Another area of general agreement among the presenters<br />
is the problem of placement of R&D activities in the<br />
organizational structure. Dr. Mayo suggests the R&D<br />
group should report directly to the technical director<br />
or commanding officer. I concur in this suggestion.<br />
Placement in the upper echelons of the organizational<br />
structure is critical since it dictates the interface<br />
between the training manager and researcher. In addition,<br />
it eases access to top decision makers, which is essential<br />
for support of research projects requiring major expenditures<br />
of resources or having significant impact on the<br />
training activity.<br />
Because of several unique aspects, a brief description of<br />
Air Training Command's R&D organizational structure might<br />
be of interest to this audience. It should be understood<br />
that the responsibility for personnel research in the<br />
Air Force has been assigned to the Human Resources<br />
Laboratory, Air Force Systems Command. Research requirements<br />
from each major command are forwarded to them for<br />
accomplishment.<br />
Somewhat unusual is the fact that the Hq ATC R&D organization<br />
is located under the Deputy Chief of Staff for<br />
Plans rather than under the DCS for Technical Training<br />
or DCS for Operations, which involves flying and navigator<br />
training. This placement permits close interaction between<br />
future command programs and R&D requirements, permits more<br />
comprehensive scanning of state-of-the-art technology than<br />
might be possible under an agency responsible for specific<br />
training, and provides access to the commander.<br />
Another unique aspect is the personnel make-up of the Hq<br />
R&D staff, which consists of a computer specialist, two<br />
flight simulator specialists, a navigator simulator<br />
specialist, an audio-visual specialist, and two behavioral<br />
scientists. Each specialist is charged to interact with<br />
academic, civilian, and military agencies developing<br />
products or conducting research in his area of responsibility.<br />
Information concerning new technologies with<br />
potential use in the command is coordinated with the<br />
appropriate DCS and/or forwarded to the appropriate<br />
training activity for coordination. Where interest is<br />
shown, arrangements are made for visitations to the research<br />
agency or briefing of appropriate staff personnel by the<br />
researcher. Each specialist is also responsible for<br />
initiating Requests for Personnel Research in his area of<br />
interest to be conducted by the Human Resources Laboratory.<br />
Training Research Applications Branches, or TRABs, have<br />
been established at each Technical Training Center,<br />
monitored by the Hq R&D group. The TRAB works for the<br />
school operations officer and has access to the school<br />
commander. Each TRAB has a civilian GS-11 educational<br />
specialist and two military behavioral scientists.<br />
These organizations perform quick reaction studies for<br />
training managers, conduct application studies with<br />
training researchers as directed by the Hq R&D group,<br />
and initiate research requests to be accomplished by HRL.<br />
All TRAB projects and research requests are controlled<br />
by the Hq R&D group to prevent duplicated effort.<br />
A Research Review Board made up of the technical directors<br />
from Technical Training, Flight Training, and Plans<br />
has been established to validate command research requests.<br />
In addition, they monitor the action taken on recommendations<br />
resulting from completed research studies. The<br />
Research Review Board responds directly to the Chief of<br />
Staff and the Vice Commander.<br />
The final area of agreement mentioned by our presenters<br />
deals with the requirement for communications between<br />
researcher and training manager. If a research plan is<br />
developed and there is close coordination during the<br />
initiation, conduct, and reporting of research, communications<br />
problems are rarely experienced.<br />
I have come to several conclusions regarding communications<br />
during the past three years of coordinating the<br />
interaction between researchers and high echelon decision<br />
makers. Some of my conclusions concur and others differ<br />
from the positions taken today by our speakers. My<br />
remarks reflect my personal opinions only and are specifically<br />
directed toward the interaction between the<br />
researcher and highest level decision makers.<br />
Researchers are their own worst enemies in the transition<br />
of research products from the laboratory to the training<br />
environment. Specifically:<br />
a. As a group they are particularly poor briefers.<br />
They generally arrive prepared to deliver a three-hour<br />
briefing to a person who has limited time for discussion.<br />
Their briefing format usually follows that of an APA<br />
journal article and is usually strongly statistically<br />
oriented. Although experts on their studies, they seldom<br />
address the critical aspects affecting the decision, i.e.,<br />
the total resource impact of implementing their work.<br />
b. The publishing of selected bibliographies and<br />
reviews of literature may look great on the vita, but it<br />
tarnishes the researcher's image among managers. I<br />
shudder when these documents reach the headquarters<br />
because they invariably precipitate two questions: "Is<br />
this what we are spending our money on?" and "What the<br />
hell am I supposed to do with this pile of junk?" This<br />
is particularly damaging when expectations are high concerning<br />
a new project and this type of document arrives<br />
representing the initial work expended. My recommendation<br />
would be to hold the document and submit it at the conclusion<br />
of the project or incorporate the entries with<br />
your interim reports.<br />
c. The time-honored format of the typical research<br />
report may be justified for publishing ease and communications<br />
between professional scientists, but it is ill-designed<br />
to convey meaningful information to training<br />
managers. Again I would encourage you to consider your<br />
audience and write to meet their needs. Minimize the<br />
discussion of your statistical design and consider<br />
presenting this information as an appendix to the report.<br />
I really recommend dropping the APA report format and<br />
going to executive summary type reporting.<br />
d. And finally, when a fast reaction situation<br />
requires it, stick your neck out. If a decision has to be<br />
made, don't suggest you can provide a solution in<br />
18 months. Use your knowledge of research data, and<br />
come up with viable options. Validate as you go and<br />
modify the approach as required.<br />
SYMPOSIUM<br />
TRANSLATION OF TRAINING RESEARCH INTO TRAINING ACTION - A MISSING LINK<br />
EARL I. JONES<br />
NAVY PERSONNEL RESEARCH AND DEVELOPMENT CENTER<br />
SAN DIEGO, CALIFORNIA 92152<br />
Having reviewed with great interest the papers of Mayo, McDowell, and<br />
Kerr, I shall attempt to summarize their main points, comment upon them,<br />
and then, as is a discussant's privilege, provide some comments of my own.<br />
Dr. Mayo, with whom I have worked closely for the better part of<br />
two decades, makes several major points. His first major point seems to<br />
me to be the rationale of this symposium. It is that a great amount of<br />
training research and development goes unused. He stipulates that<br />
failure to use R&D cannot be accounted for by the simple assumption that<br />
risk taking is a fundamental part of R&D, nor by the fact that some of<br />
the unused training R&D is indeed unusable. He leaves out an important<br />
(but fortunately small) class, namely, training R&D that has been used<br />
but should not have been.<br />
The next major point in Dr. Mayo's paper is his reference to<br />
"linear change models in education," with the stipulation that conditions<br />
of intrinsic or extrinsic motivation must be functioning for the model<br />
to work.<br />
From here Dr. Mayo moves to the position that most of the conditions<br />
functioning to obviate use or translation of training R&D are associated<br />
with (1) research personnel, (2) training personnel, or (3) the interface<br />
or interaction between research personnel and training personnel. With<br />
respect to research personnel, Dr. Mayo perceives the major sin to be<br />
conceiving studies for which researchers look for training situations in<br />
which to execute them, while failing to consider (in the planning stage)<br />
the utility or implementability of the R&D results. As a solution to this<br />
unwholesome state of affairs, Dr. Mayo has the pragmatic notion that researchers<br />
should pre-state and pre-judge the training actions to be taken contingent<br />
upon the possible R&D outcomes and accept responsibility for such<br />
planning.<br />
With reference to training personnel, Dr. Mayo describes their work<br />
overload, their dislike of waiting forever for utilizable R&D results<br />
or recommendations, their commonly not identifying with the R&D task or<br />
project, and their feeling that whoever might get credit for research-based<br />
action, they (the training personnel) will not.<br />
Dr. Mayo sees the interface or interaction between researchers and<br />
trainers to be the most critical problem. Here he recognizes that the<br />
continuity from R&D to translation to change in training can be accomplished<br />
only by appropriate interaction in which researchers and their<br />
managers take the responsibility for the translation process and assist<br />
the training personnel in the action process to the point where appropriate<br />
implementation has been accomplished.<br />
Mr. McDowell's presentation is a natural follow-on to Dr. Mayo's.<br />
He perceives the requirement for proposing research as a soul-searching<br />
responsibility for use of the taxpayer's dollar, involving the questions<br />
of benefits gained from past R&D and projected yield of future R&D. He<br />
perceives the trainers' and managers' roles as not limited to translation,<br />
but as a process of bridging the gap between conduct of research and implementation.<br />
This process is continuous according to Mr. McDowell and<br />
requires constant interaction between researcher and trainer, from problem<br />
identification and R&D planning through conduct of R&D, translation, and<br />
implementation.<br />
Mr. McDowell is not so prone as Dr. Mayo to perceive a vast graveyard<br />
of unused R&D. He qualifies his remarks in a way which implies<br />
that the Army may not suffer the "missing link" dilemma to the extent<br />
that the Navy does.<br />
Where Dr. Mayo exhorts the need for trainer-researcher interaction,<br />
Mr. McDowell talks of a system in being. He refers to interaction that<br />
"takes place;" data which are "provided to the trainer;" reports which<br />
are "reviewed and staffed with appropriate elements of the organization;"<br />
and specific examples of frequent and essential involvement of the research<br />
team in "active participation." He perceives use of R&D byproducts<br />
as increasing, and dissemination of R&D findings among potential<br />
users as essential.<br />
In closing, Mr. McDowell excerpts from Dr. McClelland's 1967 report<br />
eight characteristics of unsuccessful research efforts and five characteristics<br />
of successful research efforts.<br />
The final presentation by Dr. Norman Kerr is a shocker. In some<br />
communities it might even seem blasphemous. However, in the long course<br />
of the history of education, the history of training, and the history of<br />
training R&D, Dr. Kerr's remarks spark a flame of truth--albeit a truth<br />
so mixed with exception, complication, and dark shadow that its explication<br />
is no simple matter.<br />
Dr. Kerr's paper states what the other papers only imply. The<br />
problem may not be a missing link but a faulty chain, anchored insecurely<br />
and subject to vectors of power and stress that make the task<br />
of translating and applying R&D results in a positive manner unduly<br />
difficult, or even impossible.<br />
Dr. Kerr's major points are that training managers obviate rather<br />
than facilitate R&D, that they possess power whereas researchers do not,<br />
and that the power disparities at all levels of interaction between<br />
training managers and training researchers are the central focus for<br />
consideration in this symposium. Dr. Kerr carries this argument with<br />
conviction, verve, eloquence, and numerous examples. Although he provides<br />
a scorching indictment of training managers, Dr. Kerr does not<br />
back slowly out of the nearest exit. He finds to his dismay that his<br />
review of research literature does not provide the confirmation he<br />
seeks but that management literature does. With support from research<br />
management literature and his own knowledge of the R&D community's<br />
management models, Dr. Kerr provides some positive proposals for<br />
rapprochement.<br />
To my ear at least, the three papers we have heard are extremely<br />
useful accounts of the successes or failures of military organizations to<br />
exploit R&D as a means of improving military training. As we might<br />
expect, the papers strongly reflect both the nature and nurture of the<br />
problem and the histories of the presenters. I, too, will try to<br />
overcome that dilemma of observers--how to get outside of my own<br />
skin while being securely lodged within it.<br />
Time does not permit complete comment on "linear change models;"<br />
suffice it to say that they are useful devices which, like most models,<br />
have various constraints. The primary one is perhaps the need for<br />
intrinsic or extrinsic motivation. Here Dr. Mayo's cautionary<br />
observation that a decree by management (one type of extrinsic motivation)<br />
will not get the job done is fair warning and is echoed in<br />
different ways in both Dr. McClelland's list and in Dr. Kerr's<br />
admonitions. Extrinsic motivation is thus sometimes necessary but<br />
seldom sufficient. However, given that one of the motivational<br />
conditions is necessary, the problem becomes one of controlling the<br />
direction of its outcome.<br />
Within the context of this symposium, Dr. Mayo's point that translation<br />
of R&D into action depends upon research personnel, training personnel,<br />
and their interaction is so strongly supported by McDowell, Kerr,<br />
and logic that it needs little commentary except to wonder why, when the<br />
problem can be cast into such simple terms, the solution to the problem<br />
has not long ago been routinely applied.<br />
Mr. McDowell's paper is so convincingly positive that it would<br />
seem, for the Army at least, the solution is at hand.<br />
Shifting now to Dr. Kerr provides a somewhat stark contrast. He<br />
ignites such a conflagration that we are heaped with ashes and wonder,<br />
as in the "Perils of Pauline," if there can ever be a happy ending. By<br />
following Dr. Kerr further, the most obvious answer would appear to be<br />
a qualified "yes." To his positive point, the need for research and<br />
management symbiosis has to become an integral force for product improvement<br />
and management efficiency. Had Dr. Kerr's search of research<br />
literature been somewhat more extensive he would have found a good deal<br />
of support. The writings of Lumsdaine, Glaser, Rigney, Mackie, and<br />
even Earl Jones would not only be consoling but might dispel his notions<br />
of contrition.<br />
If my interpretation of these papers has been correct, I heartily<br />
endorse not only their papers but their explication of factors which<br />
obviate implementation of R&D into training action and their recommendations<br />
for positive steps for catalyzing, facilitating, and improving the<br />
translation and implementation process. Yet I think they have not gone<br />
far enough. What seems to me to be missing is a set of models for training<br />
management, R&D management, and a meta model, superordinate to training and<br />
R&D, which could guide both the training community and the R&D community in<br />
the spirit of integration, establish all the necessary interfaces at all<br />
levels of function in both training and training R&D, and provide such mechanisms<br />
as a Training Research Advisory Board and laboratory schools--in short, a model<br />
which could facilitate the kinds of interactions all three of our speakers have<br />
proposed in their own unique styles.<br />
TEST FEEDBACK AND TRAINING OBJECTIVES<br />
BY<br />
CARROLL H. FREESE, LTJG USCGR<br />
UNITED STATES COAST GUARD TRAINING CENTER<br />
CAPE MAY, N. J.<br />
Note: The views expressed in this paper are those of the writer<br />
and do not necessarily reflect those of the USCG Training<br />
Center, Cape May, N.J.<br />
INTRODUCTION<br />
The public seems to be generally willing to accept a decrease in<br />
the manpower of the armed services. As a result of this acceptance the<br />
active draft system has been deleted as a method of acquiring servicemen.<br />
Whether this acceptance is warranted remains to be decided, however,<br />
and in the meantime this decrease in manpower is something with which<br />
the military forces must deal. Why must we deal with it? Because, in<br />
the event that military or naval forces must be called to action on<br />
short notice, the American people will expect and demand the same<br />
readiness and response which has always been characteristic of the<br />
armed forces. An ineffective response at such a time could cause a<br />
breakdown in one or all of the systems which defend our National<br />
Security. It is apparent then that, in the face of major cutbacks in<br />
the quality and quantity of individuals joining the armed forces, we<br />
must continue to maintain an effective state of military readiness<br />
which is responsive to situation demands.<br />
Many times we lose track of the overall importance of our jobs<br />
when we become involved in the details of everyday business. However,<br />
such large problems as that mentioned above are not remedied by one or<br />
even several decisions or changes. The overall situation in this<br />
case is affected by many small variables, and the solution to it will<br />
be dependent upon many small adjustments or changes which will in<br />
turn have an overall effect.<br />
The situation with which this study deals is one of those small<br />
areas in which changes can be made which will affect the ability and<br />
readiness of the men presently being trained by our armed forces.<br />
To increase the abilities of our servicemen in their various<br />
specialties, as well as in the general knowledge of those things which<br />
pertain to the military forces, is an objective which will help<br />
accomplish the readiness which the public expects. This can be<br />
achieved even in the face of lower enlistment rates and manpower<br />
cutbacks. By increasing retention and usage of information given to<br />
trainees, we can project that this ability will enable them to operate<br />
at an increased level of awareness and efficiency.<br />
At the Coast Guard Training Center in Cape May, New Jersey, we often<br />
encourage instructors to give recruits information about their performance,<br />
on the positive as well as the negative side. Such feedback is the basis<br />
on which most people evaluate their own abilities in any particular area.<br />
In an early experiment on feedback, Hurlock (1925) emphasized<br />
encouragement or incentives and their effect on learning. He also<br />
encouraged the use of verbal praise or criticism and added weight to the<br />
preference for verbal as opposed to other types of feedback. Plowman<br />
and Stroud (1942) showed that seeing their errors corrected on an<br />
examination allowed high school students to eliminate approximately 50%<br />
of their errors on a retest one week later. This result was probably<br />
due in great part to the practice effect, but it does point to the value<br />
of feedback in enhancing ability. More recently, Page (1958) reported<br />
results which bear out Hurlock's earlier emphasis on praise and blame.<br />
He found that students who received comments on examination papers did<br />
better on the next examination than did students who received no comments.<br />
Much of the previous research in this area has been conducted<br />
comparing feedback with no-feedback situations. Curtis and Wood (1929),<br />
however, tested four methods of scoring examinations and found that<br />
those methods which provided for discussion of questions were the most<br />
effective. Stone (1955) found that the effect of feedback was correlated<br />
positively with the amount of information contained in the feedback.<br />
Students who received a full explanation as to why one answer was wrong<br />
and another was correct did much better subsequently than did those who<br />
received less, or no, information. Sassenrath and Garverick (1965)<br />
compared four treatment groups for retention and transfer of learning<br />
and concluded that the type of feedback is not nearly as important as<br />
the fact that the group gets it. In the present experiment it was<br />
hypothesized that the introduction of weekly quizzes prior to the first<br />
progress test in recruit training would increase scores on the progress<br />
test and that this increase would vary positively in accordance with the<br />
amount of feedback given on these quizzes.<br />
APPARATUS Quizzes consisting of twenty-five questions each, based on<br />
the first and second week subject matter, were constructed. Questions<br />
covered the same subject areas as those on the third week progress test<br />
but were changed to appear in a slightly different form.<br />
All quizzes were given in well lighted classrooms under<br />
comfortable conditions.<br />
SUBJECTS and DESIGN Five hundred and sixty-eight Coast Guard recruits<br />
who entered the service between 15 April and 15 August 1973 served<br />
as subjects in this experiment. They were assigned to companies,<br />
approximately fifty men each, as they reported for duty. The varying<br />
numbers in each company caused the number of subjects in each treatment<br />
group to fluctuate. As each company of men began training it was<br />
assigned a treatment category, 1, 2, 3, or 4. This continued until four<br />
companies had gone through treatment 1, four companies had treatment<br />
2, five companies had treatment 3, and four companies had treatment 4.<br />
The total number of subjects in each group was as follows.<br />
Treatment #1 - 145<br />
Treatment #2 - 134<br />
Treatment #3 - 170<br />
Treatment #4 - 119<br />
N = 568<br />
Subjects were therefore assigned as randomly as possible to treatment<br />
groups. Four instructors took turns administering the three treatments<br />
to help control for the effect of any particular instructor personality.<br />
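The rotation described above, in which each company beginning training is assigned the next treatment category in turn, can be sketched as follows; the company labels are invented for illustration.<br />

```python
from itertools import cycle

# Each company is assigned treatment category 1-4 in rotation as it begins
# training; the company labels here are invented, not the actual 1973 units.
companies = [f"Company {letter}" for letter in "ABCDEFGH"]
assignment = dict(zip(companies, cycle([1, 2, 3, 4])))

for company, treatment in assignment.items():
    print(f"{company}: treatment {treatment}")
```

With roughly equal company sizes, this rotation keeps the four treatment groups comparable in size, consistent with the group totals reported in the text.<br />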
PROCEDURE Prior to this time no quizzes were given to recruits other<br />
than the third and sixth week progress tests, which consist of 50<br />
multiple choice questions on material covered in lecture, course material,<br />
and text. Quizzes were introduced at the end of weeks one and two, prior<br />
to the first progress test. Companies received their assigned feedback<br />
condition over both quizzes and were afterwards tested using the standard<br />
"first progress test". The treatments for each group are specified below.<br />
Recruits were told that quiz grades would not be a part of their record<br />
and were for their information only.<br />
TREATMENT #1<br />
After the quiz each man kept his paper and the answers were reviewed.<br />
When no questions came up in regard to an item the instructor was told to<br />
explain the answer anyway. Recruits were encouraged to ask questions and<br />
discuss each one fully.<br />
TREATMENT #2<br />
The companies which took part in this treatment were told which<br />
choice was correct in each case, but were not given any further information<br />
or allowed to question it. Thus they received some feedback but not the<br />
amount available to treatment group one.<br />
TREATMENT #3<br />
Since we were introducing two independent variables in the feedback<br />
and the quiz itself, it was necessary to be able to say wether change<br />
was due to the feedback or merely the introduction of the quiz. It was<br />
for this reason that group three received only the quiz and no information<br />
as to how well they did on it. Instructors required that answer sheets<br />
be turned in upon completion and gave no information on correct answers.<br />
Subjects received no feedback other than how they felt they had done on<br />
the quiz.
TREATMENT #4
The control groups proceeded through training as other companies had<br />
until this time, with no quizzes until the progress test at the end of
the third week of training.<br />
RESULTS

A Kruskal-Wallis one way analysis of variance yielded an Hc value significant beyond the .05 level. Group comparisons, again using the Kruskal-Wallis formula, were made which indicated significant differences between several of the specific groups.
INSERT FIGURE 1 HERE<br />
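The analysis reported here can be reproduced with a short script. The per-recruit progress-test scores below are invented for illustration (the paper gives only the resulting statistics), and the tie correction shown is the standard one that turns H into the corrected Hc that Figure 1 reports:

```python
# A from-scratch sketch of the Kruskal-Wallis H test, including the
# tie correction (Hc).  The scores are hypothetical; the paper does
# not publish the raw data.
from collections import Counter

def kruskal_wallis(groups):
    """Return (H, Hc): the H statistic and its tie-corrected value."""
    pooled = sorted(s for g in groups for s in g)
    n = len(pooled)
    # Mid-ranks: tied scores share the average of their rank positions.
    first = {}
    for i, s in enumerate(pooled, start=1):
        first.setdefault(s, i)
    counts = Counter(pooled)
    midrank = {s: first[s] + (counts[s] - 1) / 2 for s in counts}

    h = 12 / (n * (n + 1)) * sum(
        sum(midrank[s] for s in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    # Tie correction: divide H by 1 - sum(t^3 - t) / (n^3 - n).
    correction = 1 - sum(t**3 - t for t in counts.values()) / (n**3 - n)
    return h, h / correction

# Hypothetical scores for the four treatments (full feedback, answers
# only, quiz without feedback, control).
groups = [
    [46, 44, 45, 43, 47, 45, 44, 46],
    [44, 43, 45, 42, 44, 43, 45, 41],
    [41, 40, 42, 39, 41, 40, 38, 42],
    [40, 39, 41, 38, 40, 37, 39, 38],
]
h, hc = kruskal_wallis(groups)
print(f"H = {h:.2f}, Hc = {hc:.2f}")
```

With k = 4 groups, Hc is referred to a chi-square distribution with k - 1 = 3 degrees of freedom, whose .05 critical value (7.82) is the "required" figure reported in Figure 1.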
These results imply that a definite difference existed between the group which received maximum feedback from the quizzes and both the control and the quiz - no feedback group. This difference bears out the hypothesis and was presumably due to the effects of feedback. The maximum feedback group did not, however, differ significantly from the minimum feedback group (Group II). In this respect the increase in feedback did not cause a significant difference from the lesser feedback level and fails to bear out the finer implications of the hypothesis. The trend was toward an increase in feedback predicting an increase in performance, but in this case it did not approach significance.
DISCUSSION

The results of this experiment bear out the hypothesis in regard to feedback versus no feedback and seem to confirm a statement made previously (Sassenrath and Garverick, 1965) that the type or amount of feedback is not of extreme importance in these situations. The important thing is that there is some feedback. The data show a trend, although not always at significant levels, that as the amount of feedback increases so does performance.
Instructors noted that throughout the experiment the groups which received maximum feedback showed a good deal of interest in other areas of training and asked questions which concerned more than just the quiz. One important factor in this case seems to be that of human interaction; the chance to speak with, or question, someone higher up in the chain of command. This interaction was seen to have an effect in terms of heightened interest in many areas of military life. These companies received the treatments twice over a two week period and the question crops up whether the difference in group one's performance over that of three and four was due to the educational aspects of feedback or the psychological aspect of interaction with someone of higher rank on a semi-informal basis. Both factors probably affected the results but it would be interesting to see whether the feedback was valuable because of its information or because of the human interaction.
A certain type of practice effect probably occurred also in that a majority of these men indicate a fear or apprehension concerning test taking. Allowing them to take two quizzes prior to the progress test probably helped eliminate some of the anxiety normally surrounding this test and present in group four.

Due to the nature of the environment, subjects in this experiment could not be assigned to treatment groups on a completely random basis. There is a fluctuation in recruit education level that occurs at various times throughout the year, and it is possible that this fluctuation may have had some effect on results. However, companies were assigned a treatment group in sequence as they entered training and this fact should have randomized the effects of the seasonal fluctuations.
Whatever the reasons for the effects of feedback in various situations,
we know that it does enhance performance and increases retention of information to a recognizable extent. This realization should be of prime importance in developing any training program and should incorporate the use of human interaction in increasing a person's interest and knowledge in their chosen area of service. Feedback allows individuals to assume their identity as humans with individual scores, performances, abilities and characteristics. Feedback is the basis of how we feel about ourselves and we rely on it to continue shaping our ideas. The more of it we get, the better we are able to evaluate our abilities. In a society which deals in numbers, feedback is a chance to conserve our identity and enjoy our work. As such it should be at the heart of any training program.
RESULTS

OVERALL SIGNIFICANCE AT BETTER THAN .05 LEVEL
REQUIRED 7.82    OBTAINED H = 8.19
AFTER THE CORRECTION, Hc = 8.21

GROUP COMPARISONS                       H
I vs II     LESS THAN .05 LEVEL    -  2.51
I vs III    HIGHER THAN .01 LEVEL  - 10.15
I vs IV     HIGHER THAN .01 LEVEL  - 23.34

FIGURE 1
BIBLIOGRAPHY

1. Curtis, F.D. and Wood, G.G. A study of the relative teaching value of four common practices in correcting examination papers. School Review, 1929, 37, 615-23.

2. Hurlock, E.B. An evaluation of certain incentives used in school work. Journal of Educational Psychology, 1925, 16, 145-49.

3. Page, E.B. Teacher comments and student performance. Journal of Educational Psychology, 1958, 49, 173-81.

4. Plowman, L. and Stroud, J.B. Effects of informing pupils of the consequences of their responses to objective test questions. Journal of Educational Research, 1942, 36, 16-20.

5. Sassenrath, J.M. and Garverick, C.M. Effects of differential feedback from examinations on retention and transfer. Journal of Educational Psychology, 1965, 56, 259-63.

6. Stone, G.R. The training function of examinations. Research Report No. AFPTRC-TN-55-8, 1955. USAF Personnel Training Research Center, Lackland Air Force Base.
PROBE: An automated testing system for the evaluation
of ability and/or willingness to profit from a
training situation

David B. Vinson, Ph.D.
1719 Medical Towers Bldg.
Houston, Texas 77025

In 1970, COP, a computer software program capable of assessing the effect of cerebral concussion on cognitive function, was reported. Using regression analysis, COP compared baseline data with post-injury performance data to calculate the probability of impaired cognitive function associated with cerebral concussion.

Building on experience gained with COP, a computer software program named PROBE was developed to evaluate certain cognitive and personality functions, and from the interaction of these functions as specified by algorithms, to predict the subject's ability and/or willingness to profit from a training program.

In the PROBE system, performance data are acquired by the administration of tests to assess selective attention, short-term memory, rate of inductive reasoning, speed and accuracy of continuous addition, level of aspiration, ability to read, and mental attitudes. Certain biographical data are input to the system, but sex and culture are not computed. Written in Fortran IV, the PROBE software uses approximately 27K characters with two overlays of 11K and 8K respectively. Using an XDS-940 in time-share, performance data are input and compared with stored information to generate a report which quantitatively grades a subject on five factors previously identified by factor analysis as predictive of supervisor ratings of job performance.

The five factors as defined in the PROBE print-out reports are mental grasp, self-confidence, cooperativeness, emotional stability, and leadership. PROBE gradings are from 1.0 to 5.0, reported to one decimal point. A grading of 3.0 represents the mean of high performing normal subjects. PROBE has been on-line since 1971. From receipt of data to the user's receipt of the report, the turnaround time is less than ten minutes per subject, anywhere in North America.
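The grading scale described here (grades from 1.0 to 5.0, one decimal point, with 3.0 at the mean of high-performing normal subjects) suggests a simple standardization step. The sketch below is a hypothetical reconstruction, not PROBE's actual formula, which the paper does not give; the choice of one grade point per standard deviation is an illustrative assumption:

```python
# Hypothetical sketch of a PROBE-style grading step.  The paper says
# only that grades run 1.0-5.0 with 3.0 at the normative mean; the
# scale factor (one grade point per SD) is assumed for illustration.
def grade(raw, norm_mean, norm_sd, points_per_sd=1.0):
    """Map a raw test score onto a 1.0-5.0 grade centered at 3.0."""
    z = (raw - norm_mean) / norm_sd
    g = 3.0 + points_per_sd * z
    return round(min(5.0, max(1.0, g)), 1)  # clip, one decimal point

# A subject at the normative mean grades 3.0; one SD above, 4.0.
print(grade(100, norm_mean=100, norm_sd=15))  # 3.0
print(grade(115, norm_mean=100, norm_sd=15))  # 4.0
print(grade(190, norm_mean=100, norm_sd=15))  # clipped to 5.0
```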
Although supervisors' ratings of work performance are generally acknowledged to be biased toward the mean, gradings of professional football coaches are not. Coaches' gradings of certain observable behavior associated with on-the-job performance are as shown in Table 1.
TABLE 1

[The bodies of Table 1 and Table 2, which relate coaches' gradings of knowledge of work, productivity, pride in work, learning rate, and overall performance to the five PROBE factors (mental grasp, confidence, cooperativeness, emotional stability, leadership), are illegible in this copy, as is the intervening text ending "... the agreement is as shown in Table 2."]
STUDY ?: Performance data were acquired from 855 employee candidates of U.S. Steel (Texas Works). On each subject, an early model of PROBE was processed. These raw and computed data were forwarded to the Department of Statistics, Texas A & M University. On the same subjects, U.S. Steel provided job and training data; U.S. Steel assured Texas A & M University that employee candidates were neither hired nor not-hired on the basis of PROBE reports. The two sets of data (PROBE and U.S. Steel) were kept entirely separate until they reached Texas A & M University for analysis. U.S. Steel data consisted of several ratings performed by supervisors, and certain data-of-record on hired/not hired, Table 6.
TABLE 6

RESULTS OF THE PROBE VS U.S. STEEL RATINGS

SUBJECTS             PROBE              PREDICTS U.S.S. RATINGS OF:

[Most of the body of Table 6 is illegible in this copy. The subject groups are Clerical-technical and Production (Latin, White, and Negro); the PROBE measures are mental grasp, confidence, conformity, competitiveness, cooperativeness, and all parameters; the predicted ratings include work performance, obeying safety rules, cooperativeness and conformity, emotional stability, authoritarian and non-authoritarian leadership, industrial engineering rating, times changed departments, cluster placement, acceptance of overtime and assignments, absenteeism, learning, safety violations, grievances filed, acts of physical aggression, and hired/not hired. The legible rows follow.]

Production, Negro    Mental Grasp       Absenteeism. Better PROBE, more grievances filed.
Production, Negro    Cooperativeness    Better PROBE, poorer cluster placement.
Production, Negro    Conformity         Cluster placement.
Production, Negro    Competitiveness    Safety violations.
Production, Negro    Confidence         Unrealistically high, poorer cluster placement.

(In Table 6, all predictions are in the expected direction unless otherwise noted.)
SUMMARY - Although performance data for PROBE may be acquired by non-professionals, by the use of telecommunication nets, computer hardware and computer softwares, professional judgment and experience are used in evaluating those data. PROBE is independent of the availability and cost of the examiner, of geography, and of personal factors, and PROBE is designed to conserve time.

There is no practical limit to the number of persons who can be evaluated by PROBE in a fast, reliable, valid and cost-effective way, and this evaluation predicts ability and/or willingness to profit from a training situation. It is felt PROBE could be of significant usefulness in the evaluation of applicants for military duty and/or for the evaluation of those on military service.

By use of regression analysis, base-line performance data of PROBE can be compared with any subsequent retest performance data, and the probabilities of change can be calculated. PROBE may also be used in serial studies to determine reversibility of pathologic changes associated with aging, disease or trauma.
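One common way to realize the baseline-versus-retest comparison described here is to regress retest scores on baseline scores in a normative sample, then convert a new subject's standardized residual into a tail probability. The sketch below follows that general approach with invented normative data; it illustrates the technique, not PROBE's actual equations, which are not published in this paper:

```python
# Minimal sketch of regression-based change detection: fit retest on
# baseline in a normative sample, then turn a new subject's residual
# into a probability of decline.  All data here are invented.
import math

def fit_line(xs, ys):
    """Ordinary least squares: return slope, intercept, residual SD."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    sd = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return slope, intercept, sd

def prob_decline(baseline, retest, slope, intercept, sd):
    """One-tailed probability that retest is this far below prediction."""
    z = (retest - (slope * baseline + intercept)) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # Phi(z)

# Invented normative sample: retests track baselines closely.
base = [40, 45, 50, 55, 60, 65, 70, 75]
ret  = [42, 44, 51, 56, 59, 66, 71, 74]
slope, intercept, sd = fit_line(base, ret)

# A subject whose retest falls far below prediction gets a small
# tail probability, i.e. likely impaired relative to baseline.
p = prob_decline(baseline=60, retest=48, slope=slope,
                 intercept=intercept, sd=sd)
print(f"P(retest this low | baseline) = {p:.3f}")
```

The same machinery serves the serial-study use mentioned above: repeated retests yield a sequence of such probabilities, whose movement back toward chance levels would indicate reversibility.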
ASSESSMENT SYSTEMS, INCORPORATED
"PROBE"
OCTOBER 12 1973
ASI-

FIGURE 1

MENTAL GRASP: 3.1  THE ABILITY TO LEARN FROM A SYMBOLIC REPRESENTATION OF A PROCESS OR PROCEDURE. THE ABILITY TO UNDERSTAND HOW A SUB-SYSTEM RELATES TO A TOTAL SYSTEM. ABILITY TO THINK OF ALTERNATE SOLUTIONS TO PROBLEMS.

SELF CONFIDENCE: 3.4  AN INDIVIDUAL'S FEELING OR EXPECTATION THAT HE WILL MASTER A PROCEDURE AND/OR PROGRAM AS SUCCESSFULLY IN FUTURE AS HE HAS IN THE PAST.

COOPERATIVENESS: 3.0  IDENTIFIES WITH AUTHORITY. TAKES DIRECTION FROM THOSE IN AUTHORITY. IDENTIFIES WITH CORPORATE GOALS. SUBORDINATES PERSONAL GOALS TO CORPORATE GOALS.
FIGURE 2

[Figure 2 is a sample PROBE report chart from Assessment Systems, Inc. (FORMAT: "PROBE", ASI-1734567, dated October 1973, (c) Vinson, 1970) plotting the five factors: MENTAL GRASP, SELF CONFIDENCE, COOPERATIVENESS, EMOTIONAL STABILITY, and LEADERSHIP. The chart itself is not reproducible in this copy.]
Prospective Chief Petty Officer
Leadership/Management Seminar

Commander G. C. Hinson
United States Coast Guard Training Center

INTRODUCTION
The need for leadership in the military services is axiomatic. At all levels,<br />
managers--those with a responsibility to get a job done--must be able to direct<br />
other men to want to do a job well. Recently, within the Coast Guard, there<br />
has been increased emphasis placed on the leadership abilities required of<br />
E-7 through E-9 Chief Petty Officers. Late in 1972, the Commandant tasked<br />
the Training Center at Governors Island, N. Y. with defining the elements of<br />
leadership a Chief Petty Officer must possess, and preparing a "nuts and<br />
bolts" course to teach these prospective leaders this managerial skill. This
paper outlines the procedures that were used in development of such a course.<br />
BACKGROUND<br />
What is the essence of leadership? Can it be taught in a classroom? Can it be taught at all? Can it be taught to everyone? Can men be transformed into leaders? A questionnaire sent to E-9 Chief Petty Officers revealed a leader to be intelligent, a specialist in his rate, imaginative, dependable, capable of making quick decisions, especially under pressure, and additionally one who inspires his men. Could we develop a course to produce such a person? Is this the man who could get the job done? Research in this field showed that many leadership courses are available. Most courses are designed to shape men into what someone thinks leaders ought to be. In general, many efforts have been disappointing and a waste of time and money. Even extensive training at military academies has failed on many occasions to transform cadets into effective leaders. The reasons for our apparent failures should be explored: are our methods of instruction, the students, or a combination of these two factors to blame; or is something else at fault? What should be changed to improve leadership training? Many studies show that the most important dimension of a leader is his relationship to his subordinates. He will be most effective when his men respect and trust him. But respect and trust are built upon something deeper than personality and friendship. This made a lot of sense when we re-read the characteristics cited on the CPO questionnaire: intelligent, specialist in rate, imaginative, dependable, capable of making a quick decision, especially under pressure. It is clear that respect and trust must be resultants of performance-oriented characteristics. Fred E. Fiedler has conducted extensive research on leadership at the University of Illinois. His studies show that it makes little sense to speak of a good or poor leader. There are only leaders who perform well in one situation but not in another. In these situations, he differentiates between task-motivated and relationship-motivated leaders. Those primarily
interested in building personal relationships with their fellow workers are said to be relationship oriented; those interested primarily in getting the job done are considered task motivated. Fiedler also asserts that a task-motivated leader will have a greater probability of success than a relationship-motivated leader. Based upon our analysis of the literature available, a course in leadership should stress (1) the psychology of motivation and the satisfaction of doing a job well, (2) analysis and control of leadership situations, and (3) building a relationship of mutual respect between the leader and members of the group. This was the basis for the Coast Guard Leadership Course and our challenge was to develop realistic rather than idealistic leaders.
HISTORY OF DEVELOPING THE COURSE
E-7 through E-9 Chief Petty Officers are Coast Guard front-line managers. They most frequently fill middle management roles, serving to clear the way for command actions and letting the command know how the crew feels. But on the other hand, Chiefs are assigned smaller units as Officer-in-Charge and as Executive Officer. These possible assignments require having both a narrow specialized knowledge in a technical field and broad knowledge of the service in general. They also must be able to communicate vertically and horizontally within the command. Our leadership course was tailored then to meet these needs. We sought also to provide prospective Chiefs with a basis for decision making. A leader has three methods of exercising his skill: he can be authoritarian, democratic, or depend upon his men to exercise responsibility and good judgment to get things done. Although the good leader must use all three techniques to effectively operate, current literature promotes democratic leadership or group dynamics as the preferable choice. This conflict results in confusion. To quote the Harvard Business Review:

"often he [the leader] is not quite sure how to behave; there are times when he is torn between exerting 'strong' leadership and 'permissive' leadership. Sometimes new knowledge pushes him in one direction ("I should really get the group to help make this decision"), but at the same time his experience pushes him in another direction ("I really understand the problem better than the group and therefore I should make the decision"). He is not sure when a group decision is really appropriate or when holding a staff meeting serves merely as a device for avoiding his own decision making responsibility."

We felt a Chief's decision making ability would be proportional to the information he had to draw upon. Consequently, we stress the forces operating in the decision making process.
(1) fiscal restraint
(2) responsibility and authority
(3) legal boundaries
(4) military considerations
    (a) chain of command
    (b) personnel evaluation
    (c) discipline

The successful leader is aware of all relevant forces and can put them into perspective when making a decision. He is flexible enough to not fear change, when change may be indicated. Therefore, the Coast Guard Leadership Course emphasizes analysing situations, knowing available options, and selection of an efficient strategy.

SUMMARY

Our course:

1. Supplies information (or indicates where information can be found) concerning relevant forces.

2. Teaches analysis techniques to enable one to know self, subordinates, superiors, and peers.

3. Stresses the selection of the most appropriate and productive leadership techniques, and

4. Provides the opportunity to develop communication skills.
Our goal at the Training Center is to produce a realistic leader; one who is well grounded in Coast Guard programs, procedures, policies, and problems; one who knows his importance in the management of resources; one who is sensitive to the needs of his subordinates and his superiors. We feel that our four week Leadership Course can achieve these goals.
APPENDIX<br />
Outline of Curriculum<br />
Enclosure (3.) to CO TRACRN Ltr 1510 dtd<br />
PROPOSED PROSPECTIVE CHIEF PETTY OFFICER LEADERSHIP/MANAGEMENT SEMINAR
Monday<br />
a. Introduction to the training center and check in.
b. Greeting by Commanding Officer, Executive Officer, Senior Chaplain, and the President of Governors Island Chief Petty Officers Association.
c. Explanation of the seminar's methods and goals - emphasizing skill, not theory.
d. Administer the Leadership Opinion Questionnaire.
e. Present historical leadership studies.
Tuesday<br />
a. In depth course orientation; how to accomplish the job.
b. The psychology of motivation including perception, role identification, communications and role playing.
c. Achievement motivation including security vs. challenge quiz and review (quizzes are personality preference measures).
Wednesday
a. Continuation of achievement motivation including Machiavellianism and summary.
b. Leadership styles; Fiedler Development.
c. Leadership Matrix and review Leadership Opinion Questionnaire questions.
Thursday
a. Situation evaluation and solution choosing - obtaining facts prior to decision making.
b. Return LOQ and discuss the results.
c. Leadership games, including "Survival on the Moon".

Friday
a. The importance of communicating, encouraging suggestions.
b. Retake and review of LOQ.
c. Review and discuss pain research.
-SECOND WEEK-

Monday
a. The Coast Guard, including the movie "The Eighth Mission", Coast Guard Organization and the Chain of Command.
b. Career Counselling, which includes methods of counselling, self understanding, human understanding, evaluation of people, basic salesmanship, effective listening, counselling problems, interviewing and specific areas of CG personnel policies (rotation, etc), P.A. assistance, social security, insurance survivors benefits and annuities, personal financial management, civilian careers both after retirement and the current job market.
Tuesday
a. Drug abuse including types of drugs in use and the reasons people turn to drugs.
b. Continuation of career counselling.

Wednesday
a. Drug abuse continued; alcohol, and the Drug Exemption Program.
b. Discipline and grievances.
c. The Uniform Code of Military Justice.

Thursday
a. Human Relations
b. Body Language
c. Fundamentals of recruiting

Friday
a. Human Relations
b. Public speaking
c. Public information
-THIRD WEEK-

Monday
a. Human relations
b. Budget process including Sub-Head 30, planned projects and procurement, cost consciousness
c. Allowance lists
d. Procurement, including advance planning, work lists vs. procurement lead time.

Tuesday
a. Human relations
b. The Personnel Manual (CG-207)

Wednesday
a. Human relations
b. Title "B" property
c. Personnel records
d. Health records

Thursday
a. Pay and Allowances - pay office procedures
b. Travel - household effects
c. Leave, leave rations, computation, types
d. CHAMPUS benefits, eligibility, orthodontics
e. Medical
f. Retirement
Friday
a. Educational opportunities, including service schools, USAFI and off duty education.
b. Advancements, including career patterns, servicewide exams.
c. Uniform regulations.
d. Training procedures.
-FOURTH WEEK-

Monday
a. Organization, delegation and planning of relatively complex tasks, including long range goals and intermediate objectives and the need for follow up.
b. Clerical, including effective writing, correspondence manual.
Tuesday<br />
a. Forms, reports and publications.<br />
b. Classified material, handling, stowage, access and the need to know.<br />
Wednesday<br />
a. Record keeping<br />
b. Rules of leadership.<br />
Thursday<br />
a. CPO responsibility up and down the line: a general summation<br />
of what a CPO is, ranging from accident prevention through experience<br />
to rumor control by not participating in speculation; a chief's<br />
presence in the work area, not the CPO mess; supporting the service<br />
and superiors and not adding to gripe sessions with junior enlisteds.<br />
Friday<br />
a. Check out<br />
b. Graduation - CO or SO address class - theme "Personal Integrity<br />
and Ethics".<br />
ADDITIONAL TRAINING AS DESIRED (NIGHTS)<br />
1. Self-Taught Reading Improvement<br />
2. Self-Taught Mathematics Improvement<br />
3. Self-Taught Spelling Improvement<br />
TENTATIVE SEMINAR TEXTS<br />
1. "What Every Supervisor Should Know", Lester R. Bittel, Second<br />
Edition, 1969, McGraw-Hill Inc.<br />
2. Naval Leadership, US Naval Institute<br />
3. "Right Down the Line", Charles A. Pearce, Ed., 1955, Arrow Books, Inc.<br />
4. Various articles on motivation, achievement and leadership.<br />
146
R. F. Waldkoetter<br />
Consultant, Educational and Personnel Systems<br />
Indianapolis, Indiana<br />
The wide expanse that is perceived when we begin to think of ways to<br />
provide the best uses of human resources includes so much for attention, we<br />
may be disturbed by the complexity of this challenge. The human subject,<br />
worker or friend, is not easily directed in the most predictable channels<br />
without occasionally interfering with our programmed goals by time, territorial<br />
or value differences. In behavior modification, training, and<br />
other conditioning exercises, a path is sought which can give desirable<br />
ethical and productive results without the majority of persons involved<br />
resisting the direction in which they are managed or compelled to follow.<br />
Over the years in the endeavor addressed under the quest for adequate<br />
personnel evaluation techniques, there has been much concern about evaluation<br />
which can reasonably assure that a person with acceptable skill is<br />
selected for whatever opportunities are found available. This concern has not<br />
diminished as greater and greater effort has been made to get at skill,<br />
ability, interest and personality factors which are believed to be fundamental<br />
sources of intrinsic, affective behavior. As a position guiding measurement<br />
design, a recurrent doubt is frequently dismissed whenever the<br />
problem of motivation is cued or the "sense of timing" awareness shown,<br />
which both seem to influence how skilled behavior is brought to bear in any<br />
problem situation.<br />
With some assumptions about what have been the generating forces for<br />
personnel evaluation -- measuring individuals primarily with paper-pencil<br />
tests in achieving some type of criterion behavior -- there remains also<br />
a principal question regarding the desire and persistence needed to maintain<br />
behavior after the record of achievement and aptitude is indexed.<br />
Whether we view persons measured, getting ready to be measured, or at a<br />
specified interval after measurement, a constraint is still present that<br />
makes one feel uneasy to the degree that a strong level of continuing<br />
motivation is not foreseeable unless very high qualifying standards are<br />
somehow applied. What choice is there for preparing independent behavior<br />
that will directly contribute to greater efficiency and eventual productivity<br />
without command? We have relied upon passive and coercive methods and authoritarian,<br />
147
particularly ego-centered goals. Under favorable conditions individuals<br />
will expend their intellectual and emotional resources to attain specified<br />
goals without being driven by harsh instruction and built-in punishment<br />
should they not comply to set schedules. Again, if you will suspend<br />
critical analysis for a moment, I ask that you give your attention to the<br />
concept that the most convincing teaching and experience results from your<br />
having permitted yourself to be so engrossed in a practice task, your sense<br />
of awareness is almost wholly guided on the culmination of the action<br />
being performed. You seek only to determine the degree of execution by<br />
which you might achieve a finer state of performance and in time you<br />
allow yourself to evaluate the results in terms of some other instructive<br />
experience. When the activity does not satisfy you with the anticipated<br />
results some corrective feedback is usually applied to obtain adjusted<br />
future responses toward the goal selected. Seldom did critical insight<br />
occur where we could ascertain that raw energy was not the primary<br />
motivating force, but rather that the proper amount of energized problem-solving could<br />
shift the level of consciousness to let go of value-laden thought.<br />
Although not quite in the same way as hypnosis does, TM produces, with the<br />
mantra applied, an almost immediate change in mental and physiological<br />
functioning (Wallace, 1970).<br />
This short definition is practically all that need be stated about<br />
the identity of TM as a technique. The agreement on its potential effect<br />
in personnel evaluation is yet to be acknowledged. So let us exercise<br />
the collective imagination to the point that we may admit there are many<br />
contingencies which will become evident in future developments in personnel<br />
evaluation, testing and personnel system analysis. Riding the crest of the<br />
wave with statistical applications and computerized innovations, testable<br />
behavior has been mainly explained as a dependent variable in response to<br />
tasks set before the human subject. Behavior not directly perceptible<br />
through the nearly always observable channels of psychomotor evaluation is<br />
routinely dismissed as not in the territorial interest of the practicing<br />
behavioral scientist. Few models of human behavior can be purported<br />
to account for altered states of consciousness and connected behavior.<br />
How may these states tend to produce systematic variations in behavior when<br />
all possible characteristics in selected individuals appear to be similar?<br />
What Related Ideas and Functions of TM Can Be Suggested?<br />
To a certain extent I believe Mc Clelland (1973) was speaking of<br />
something bordering on a change in a theory of testing when he specified<br />
"tests should involve operant as well as respondent behavior," and "tests<br />
should sample operant thought patterns to get maximum generalizability to<br />
various action outcomes." He goes on to say that "the tester of the future<br />
is likely to get farther in finding generalizable competencies or characteristics<br />
across life outcomes if he starts by focusing on thought patterns<br />
rather than by trying to infer what thoughts must be behind the clusters of<br />
action that come out in various factors in the traditional trait analysis."<br />
Yet Mc Clelland has only seen fit to approach the edges of the testing<br />
domain, and balances precariously to not imply further that we might well<br />
study those psychic variables which impact on the levels of consciousness<br />
and thus affect the thought patterns.<br />
To get behind thought patterns for testing purposes will give interested<br />
scientists no end of quizzical moments when they attempt to learn about<br />
the "conditions of knowing" that many of us were taught to be forbidden<br />
territory. Today, observing human energy research trends in biophysics<br />
and such interdisciplinary fields, if the psychophysicist does not emerge<br />
to work with the innovative experimental clinician, the future of human<br />
understanding and measurement could well be removed from the hands of<br />
conservative theorists and practitioners. How could this ever happen,<br />
experienced psychologists will say, attempting to think of flaws in the<br />
logic of this paper. Very well, then, I will hazard several predictive<br />
views under which the TM influence of defined levels of consciousness<br />
might give differing insights in relation to how certain varieties of measured<br />
behavior could be interpreted. Then, if these predictive views do<br />
149
not suggest to you feasible study problems, I am mistakenly following<br />
heretical research developments, and personnel evaluation should be<br />
declared only a field for refining proven psychometric methodology.<br />
Viewing the TM technique as a supportive treatment, let us visualize<br />
a situation where the measured output of an individual has become erratic<br />
and this person seems to be chronically fatigued. Unless there is some<br />
physiological damage, he should regain any loss of applied skill where<br />
normal loss is not expected to accompany a heavy degradation of skill. By<br />
using TM, Wallace (1970) has, with advanced meditators acting as their own<br />
controls, showed changes in EEG (more frontal alpha and theta waves), and<br />
decreased<br />
his own hypnotic condition and complement his further awareness by the<br />
practice of TM. A heightened sense of human understanding is hypothetically<br />
in reach, and the behavioral scientist would be ready to analyze aspects<br />
of consciousness all but left to imagination and theological speculation.<br />
This exploration of inner space, as with outer space, will not appeal to<br />
everyone and to some conjures up threats to sanity and human integrity.<br />
To motivate selected individuals to undergo experiences which to date have<br />
generally proven enriching should add excitement to learning while<br />
maximizing performance and creativity.<br />
The person who risks his own well-being and those material things<br />
acquired in the fully conscious realm of "getting and doing" will likely<br />
reconsider goals and values when exposed to different levels of consciousness<br />
in which spontaneous trial and error behavior are minimized. No<br />
tendency to coerce learning and task behavior should be tolerated, but when<br />
selected individuals master the experience of altering their conscious<br />
levels of awareness, they will contribute to greater creativity and composure<br />
in their living and working environments. Both political and religious<br />
movements have long profited from having dedicated individuals and a<br />
structured method for preparing themselves to face challenges as potential<br />
opportunities. In effect, to know what conscious variables in altered states<br />
of awareness open a subject to greater insight for accelerating learned<br />
behavior, I believe, will reveal human resources only seldom tapped and<br />
hardly ever articulated as hopeful attributes for personnel evaluation.<br />
Will you accept the existence of such a technique as TM, and does it<br />
threaten or serve as an optimistic note in searching for well-integrated<br />
measures to predict the further exploration of promising human resources?<br />
Can We Afford to Deny Consciousness in Personnel Evaluation?<br />
Whenever a projected discussion of something like "states of consciousness"<br />
is put into written words, the fear that much of what is said will<br />
seem beyond rational comprehension begins to be very constraining. On the<br />
other hand, the scientific endeavor to plow a new section of thought is even<br />
more provoking. Whether my discourse on TM will pique your interest or<br />
disdain really should not matter. The basic purpose is to let a newer<br />
theoretical and experimental alternative be expressed so that personnel<br />
evaluation does not become too self-satisfied with fundamental principles<br />
of human performance and the origin of measurable qualities.<br />
Surely, operant behavior can be interpreted as resulting from the type<br />
of environment the individual gains through sensation and genetic inheritance.<br />
However, that environmental awareness is able to be altered by<br />
variables which are not immediately perceptible. Everything from electromagnetic<br />
fields and atmospheric changes to psychic interference by adverse<br />
energy patterns developed between individuals seems to give us pause to<br />
wonder what does cause the behavior we try to evaluate. In a recent publication<br />
Rogers (1973) made an enlightening review of some new challenges in psychology<br />
and pointedly questioned "Is this the only reality." You need not<br />
151
agree with his own brand of clinical methodology, yet I feel great<br />
significance must be put in his serious query about needing to risk<br />
the investigation of paranormal phenomena.<br />
An approach as seen through the channel of TM can help open the<br />
dark door so prohibited by conservative investigators when inquiring<br />
about the other reality. Where human beings in all walks of life have<br />
carelessly sought a finer awareness with drugs and other illusory substances,<br />
that behavior has led to many unfortunate incidents. At least<br />
two Army general officers have accepted voluntarily introduced presentation<br />
of TM for interested personnel, but only after being initiated into<br />
TM practice themselves and experiencing personally favorable results<br />
(Eastman, 1973).<br />
New horizons are going to be attained as uses of TM and other states<br />
of consciousness are delved into to discover the sources and functioning<br />
of our thought patterns. If TM engenders new experiences that are self-actualizing<br />
for us and lead to firmer harmony within and beyond our own<br />
degrees of awareness, we can be more confident in understanding the individual<br />
behavior we are assessing and how human resources should be properly used<br />
in diversified organizational climates. There will be a constant urge to<br />
study the complete human system in the interaction with environment and<br />
the productive results being obtained. The emphasis on methodology in evaluation<br />
cannot be ignored; however, the finest analysis procedures are misguiding<br />
if we are not evaluating the skills or traits we think should be under<br />
study.<br />
In searching for various ways to define and apply our findings<br />
about human behavior, the goal to conserve and develop a full range of<br />
human resources must not be limited by lack of newer concepts and subsequent<br />
methods in personnel evaluation. During a recent lecture by a well<br />
known astronaut (Mitchell, 1973), he stated that he was convinced of the<br />
need to study the spectrum of consciousness displayed by the human organism,<br />
and has organized such a group for this purpose. With this kind of forceful<br />
inquiry taking place in numerous international study centers, we must<br />
agree such techniques as TM should be of some interest in our professional<br />
frame-of-reference. There are human abilities which apparently transcend<br />
the traditional explanation of how skills are normally acquired and used.<br />
We should see what effects these abilities have on our theories and evaluation<br />
practices even if the prospect seems unsettling. In the hope that<br />
my discussion of Transcendental Meditation has not alienated your own<br />
creative urges, I end with the meditative remark attributed to Einstein:<br />
". . . Imagination is more important than knowledge."<br />
152
Abrams, A. I., Paired associate learning and recall: a pilot study<br />
comparing transcendental meditators with non-meditators. Unpublished<br />
progress report. Berkeley: University of California, Education<br />
Department, February, 1972.<br />
Doucette, L. C., Anxiety and transcendental meditation as an anxiety<br />
reducing agent. Unpublished paper. Hamilton, Canada: McMaster<br />
University, January, 1972.<br />
Eastman, M., The military meditators. Army/Navy/Air Force Times, July 4,<br />
1973.<br />
Fiske, E. B., Thousands finding meditation eases stress. The New York Times,<br />
December 11, 1972.<br />
Kanellakos, D. P. and Melvin, W., The practice of meditation as a means to<br />
the fulfillment of the ideals of humanistic and transpersonal psychology.<br />
Paper presented at the 10th Annual Meeting of the Association<br />
of Humanistic Psychology, Honolulu, Hawaii, August, 1972.<br />
Mc Clelland, D. C., Testing for competence rather than for "intelligence".<br />
American Psychologist, 1973, 28, 1-14.<br />
Mitchell, E. D., Outer space and ESP. Lecture presented at the 3rd Annual<br />
PSI, Inc. Conference, Indianapolis, Indiana, October, 1973.<br />
Rogers, C. R., Some new challenges. American Psychologist, 1973, 28,<br />
379-387.<br />
Wallace, R. K., Physiological effects of transcendental meditation: a<br />
proposed fourth state of consciousness. Unpublished doctoral dissertation.<br />
Los Angeles: University of California, 1970.<br />
153<br />
"IMPLICATIONS OF CARREL INSTRUCTION ON MILITARY TESTING" A PRENARRATED SLIDE PRESENTATION<br />
BY DR RONALD W. SPANGENBERG OF THE MEDIA DESIGN & EVALUATION BRANCH, TECHNICAL TRAINING<br />
DIVISION, AIR FORCE HUMAN RESOURCES LABORATORY<br />
AFHRL TECHNICAL TRAINING DIV. This presentation is entitled Implications of<br />
MEDIA DESIGN & EVALUATION BRANCH Carrel Instruction on <strong>Military</strong> Testing, Ron<br />
Spangenberg speaking. For proper synchronization<br />
you should now be seeing the specially designed<br />
focus frame of the Media Design & Evaluation Branch<br />
of the <strong>Technical</strong> Training Division, Air Force<br />
Human Resources Laboratory.<br />
Focus frame<br />
1<br />
CARREL INSTRUCTION<br />
Student working at standard<br />
carrel<br />
2<br />
Basic Electrical and Electronics<br />
School, U.S. Naval Training<br />
Center, San Diego, California<br />
Students using workbooks and<br />
programmed texts at "dry" carrel.<br />
Some test equipment on shelf.<br />
4<br />
What does carrel instruction mean to you?<br />
For the Navy student at San Diego it means a<br />
programmed instruction booklet with some<br />
items of electronic test equipment.<br />
For the planners at Ft Benning, carrel<br />
instruction means media.<br />
HRL Media D/E Lab -2<br />
CARREL INSTRUCTION - SPANGENBERG<br />
AUDIO-VISUAL STORY BOARD AFHRL/TT<br />
College Library<br />
Mt San Jacinto College<br />
Gilman Hot Springs, California<br />
Study carrel containing: 35mm<br />
filmstrip projector and reel to<br />
reel tape playback unit.<br />
5<br />
The carrels at Mt San Jacinto College have<br />
prenarrated filmstrips.<br />
UPT Learning Center<br />
Williams AFB, Arizona<br />
Two-man study carrel designed<br />
to represent T37 cockpit. Audio<br />
selection directs the student<br />
through procedural lessons.<br />
6<br />
Undergraduate pilot trainees at Williams work<br />
with A/C control panels,<br />
Student watching Audiscan with<br />
Security Police Training<br />
Program<br />
7<br />
while Security Police at Lackland can view<br />
Audiscan presentations at a table.<br />
MEDIA DESIGN & EVALUATION LAB<br />
AFHRL<br />
Cartoon figure with media<br />
equipment<br />
In the Air Force Human Resources Lab the Media<br />
Design and Evaluation Branch of the <strong>Technical</strong><br />
Training Division has been involved in<br />
155<br />
CARREL DESIGN carrel design and instruction for several years.<br />
Carrel with calculator and tape<br />
recorder visible. Way to<br />
operate calculator program is on<br />
screen.<br />
9<br />
Weapons Mechanics Course<br />
Compulsory Remedial Training<br />
Lowry AFB, Colorado<br />
Students viewing prenarrated<br />
slide presentations at standard<br />
carrels.<br />
Mechanics Course<br />
Compulsory Remedial Training<br />
Lowry AFB, Colorado<br />
Student viewing PSM 6 Program<br />
at Performance Aiding Carrel.<br />
Panel inset for Performance Aiding<br />
Carrel with wiring for making<br />
continuity checks.<br />
We have set up carrels in a learning center<br />
presenting prenarrated slides most frequently.<br />
We have designed and built a performance aiding<br />
carrel in which an inset panel in the center of<br />
the working surface can be replaced with<br />
specific performance boards,<br />
such as this board designed to refresh the<br />
students in making continuity checks with the<br />
PSM 6 Multi-meter.<br />
156<br />
(1) carrel instructional materials must be<br />
developed, produced and validated locally,<br />
Carrel with two Carousels, 35mm<br />
projector and audio playback<br />
unit. Random access slide<br />
selector is shown.<br />
11<br />
INTERACTIVE CARREL<br />
Interactive carrel with two<br />
persons viewing an Audiscan<br />
presentation<br />
14<br />
(2) carrel instruction is not necessarily<br />
performed in isolation. We feel students<br />
frequently should work together as in this<br />
interactive carrel which is designed to help<br />
two students work together - or for an<br />
instructor to help a student.<br />
The Audio Visual Instructional<br />
Display System module being<br />
demonstrated by a secretary<br />
15<br />
(3) the third conclusion is that a carrel is<br />
not a piece of library furniture. This portable<br />
carrel can be rolled to actual pieces of equipment<br />
and provide hands-on instruction. Thus a<br />
carrel could consist simply of a set of earphones.<br />
CARREL INSTRUCTION<br />
Student working at standard<br />
carrel.<br />
16<br />
You can conclude carrel instruction may be many<br />
different events. However, there are three areas<br />
of evaluation which will receive increased<br />
emphasis due to the use of carrel instruction.<br />
157
CARREL INSTRUCTION<br />
QUALITY CONTROL<br />
Student working at standard<br />
carrel.<br />
17<br />
CARREL INSTRUCTION<br />
QUALITY CONTROL<br />
MODULE VALIDATION<br />
Student writing at standard<br />
carrel.<br />
18<br />
CARREL INSTRUCTION<br />
LEARNING<br />
(1) Pylon wing stations are<br />
numbered:<br />
22<br />
(d) 1, 2, 3, 9<br />
(a) By gas pressure<br />
(b) By hydraulic pressure<br />
(c) By electrical solenoid<br />
(d) Pneumatically<br />
(11) During ground operation . . .<br />
(a) Turning off hydraulic<br />
pressure<br />
(b) Installing a lockout<br />
bolt<br />
(c) Installing a ground<br />
safety pin<br />
(d) Reversing breech<br />
sleeves<br />
23<br />
The student is given about 30 seconds for each<br />
test frame.<br />
(1) Pylon wing stations are<br />
numbered:<br />
Then after he has completed the test he will<br />
then check the answers.<br />
(a) 1, 2, 3, 9<br />
(b) 2, 4, 6, 8<br />
(c) 1, 2, 4, 7<br />
(d) 1, 7, 5, 2<br />
24<br />
159<br />
(5) The carrying hooks...<br />
25<br />
(c) By electrical solenoid<br />
(d) Pneumatically<br />
(11) During ground operations . . .<br />
(a) Turning off hydraulic<br />
pressure<br />
(b) Installing a lockout<br />
bolt<br />
(c) Installing a ground<br />
safety pin<br />
(d) Reversing breech<br />
sleeves<br />
26<br />
27<br />
PSM 6 Multi-meter with red<br />
lead being inserted<br />
The sound track would say the answer is (c).<br />
During ground operations the accidental release<br />
of loaded stores is prevented by installing a<br />
ground safety pin.<br />
Following each self test the student is invited<br />
to ask the instructor to explain any confusion<br />
which he may still have.<br />
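The pacing described above -- roughly 30 seconds per test frame, followed by a check of the answers -- can be sketched as a small program. This is only an illustrative sketch: the frame texts, the answer key, and the `seconds_per_frame` value are hypothetical stand-ins, not the actual Lowry test items or timings.

```python
import time

def run_self_test(frames, seconds_per_frame=30, get_answer=input):
    """Present each test frame, collect an answer, then score against the key.

    frames: list of (prompt, correct_choice) pairs.
    Returns (score, per_frame_results) where each result is
    (choice, was_correct, answered_within_time).
    """
    results = []
    for prompt, key in frames:
        start = time.time()
        choice = get_answer(prompt + " ")
        # The student is paced at roughly seconds_per_frame per frame.
        within_time = (time.time() - start) <= seconds_per_frame
        results.append((choice, choice == key, within_time))
    # Answer-review pass: reveal the score, as the sound track reveals the key.
    score = sum(1 for _, correct, _ in results if correct)
    return score, results

# Hypothetical frames standing in for the actual slides.
frames = [
    ("(1) Pylon wing stations are numbered (a-d)?", "a"),
    ("(11) Ground release is prevented by (a-d)?", "c"),
]
```

For example, `run_self_test(frames, get_answer=lambda p: "c")` answers "c" to both frames and so scores one of the two correct.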
Mr. Raiford, another Lowry instructor with whom<br />
we have worked, developed a refresher module for<br />
making continuity checks with the PSM 6 Multi-meter.<br />
Using a mediated program and a special board in a<br />
performance carrel, the student was walked through<br />
a PSM 6 continuity check.<br />
160<br />
Student performing continuity<br />
check with Make/Break switch<br />
open.<br />
29<br />
Check with Make/Break switch<br />
closed.<br />
30<br />
Finger pointing to wires on<br />
Continuity Check Insert Board<br />
Check on wire 6 showing PSM 6<br />
32<br />
The student was shown an open circuit check<br />
which he then performed.<br />
Then he was shown a closed circuit check which<br />
he performed.<br />
After both open circuit and closed circuit<br />
examples were shown using wires, the<br />
student then performed additional continuity<br />
checks. The results of proper checks were shown<br />
him. Successful completion of the program<br />
required the self-evaluation designed with the<br />
program.<br />
DENSITOMETRIC MEASUREMENT

In another area a program was developed by Ron Filinger on densitometric measurement.
21 STEP WEDGE - 11/11
33

Final frame in Densitometric Measurement Program. The student will measure 11 out of 11 steps on a 21 step wedge.
34

Sensitometric Strip No. 0395
35

After the student is shown the correct procedure he is given a sensitometric strip on which he performs densitometric measurements.

Sensitometric Strip lying on Densitometric Measurement Answer Sheet
36

The student records the various readings which are then checked by an instructor. We feel that evaluation should be integrated into each module.
CARREL INSTRUCTION
QUALITY CONTROL
MODULE VALIDATION

Student working at standard carrel
37

Instructor focusing 1/2" video camera on MJ 1 bomb rack. Prompt cues written on chalk board in background
38

Instructor shooting Sony Rover 1/2" video camera in developing MJ 1 bomb rack module
39
Learning module validation will become increasingly objective. Modules will be systematically evaluated to insure optimal learning. The area of module evaluation will probably exercise the innovativeness and creativeness of military testers to the greatest extent.

In module development and validation we have been experimentally placing technology in the hands of experienced instructors. I will show you two of the techniques we have examined. The first technique is dynamic storyboarding using 1/2 inch video tape.

An instructor shoots the desired performance of the job being trained. After review and feedback from other instructors, and possible revisions, the video sequence is shown to a small number of students. These students are
EVALUATE

Instructor evaluating student performance on MJ 1 bomb rack module
41
SHOOT

Instructor developing MJ 1 module is shooting with Insta-matic camera
42

SEQUENCE

Instructor sequencing slides on multiplex slide sorter
43

SHOW

Instructor projecting slide from MJ 1 module for students
44
then taken to the equipment and their performance is evaluated. Any difficulties or confusion on the part of the student would necessitate some revision of the storyboard. When the module is informally validated, the media format for student learning is selected. We do not recommend the use of 1/2 inch video tape as a primary teaching media but normally would convert this dynamic storyboard to a prenarrated slide or Super 8 presentation. Revision or validation is completed before any production costs are incurred. When the instructor learns to develop modules which teach visually, another storyboard technique can be used.

Using an Insta-matic or 35mm camera he can shoot selected critical views. Then when he receives the developed slides from the photo lab he selects and sequences a set of slides portraying what is to be taught. He will then present the slide sequence to students, developing also the narrative associated with each visual.
SYNTHETIC JOB PERFORMANCE

Evaluation of synthetic job performance can take many forms.

Student operating switches in LT38 front cockpit
49

Ft Carson, Colorado. Video camera setup which records panel during Vulcan Weapons System operation
50

QUALITY CONTROL

Student performing arresting uplock assembly check
51

RELIABILITY

Student making continuity check in performance carrel
52

For example at Ft. Carson, video tape is used to evaluate the performance on the Vulcan Weapons System.

Quality control can be built into many modules. The instructor is able to make a performance check on each member of the team of four students who are learning the A/C preparation procedure for loading an AIM 7 guided missile in this quadraphonic sound module.

It seems quite important that the reliability,
EVALUATE

Instructor evaluating student performance on MJ 1 Module
45

CARREL INSTRUCTION
QUALITY CONTROL

Student working at standard carrel
46

SYNTHETIC JOB ENVIRONMENT

Student sitting in front cockpit of LT38, a photo mockup with some live switches of F4C aircraft
47

COCKPIT PROCEDURES

Student at dummy rear cockpit
48
Students can then be evaluated and the storyboard validated.

The third area of increased emphasis in evaluation related to carrel instruction is quality control. Quality control of criterion performances will be increasingly emphasized. Student sampling and extensive student performance tests can insure the objectivity and validity of student performance checks.

Synthetic job environments can be readily created for comparative purposes. The use of synthetic performance tests, such as this cockpit procedures check, should increase because of their potential economy.
RELIABILITY
VALIDITY

Student making continuity check in performance carrel
53

RELIABILITY
VALIDITY
OBJECTIVITY

Student making continuity check in performance carrel
54

Nose gun ammunition handling system. Pointing to link
55

INDIVIDUAL TEST PERFORMANCE

Instructor testing student on MJ 1 parts location
56

validity and objectivity of all tests be established through student performance testing. An example of what I mean is now being conducted at Lowry AFB. We are asking the question of whether a picture test can substitute for a performance test in certain kinds of tasks. Using the same modules of instruction we are comparing two ways of testing.

Following exposure to the learning module Sgt Osborne takes the student to the actual equipment and asks the student to show him the selected items. The other way of testing permits group testing using pencil and paper.
GROUP TEST
57
CARREL INSTRUCTION

Student working at standard carrel
58

CARREL INSTRUCTION
QUALITY CONTROL
59

CARREL INSTRUCTION
QUALITY CONTROL
MODULE VALIDATION

Student working at standard carrel
60
PENCIL PAPER

Instructor testing students on MJ 1 parts identification

The student is shown a picture of the item and asked to select the correct name. These ways of testing are being assessed as to whether they can be considered equivalent.

The use of carrel instruction appears as an opportunity for military testing to provide better quality control of student performance, to systematically validate learning modules
CARREL INSTRUCTION
QUALITY CONTROL
MODULE VALIDATION
LEARNING EFFECTIVENESS

Student working at standard carrel
61

and to increase learning effectiveness.

AFHRL
TECHNICAL TRAINING DIVISION
MEDIA DESIGN & EVALUATION BRANCH
62
PROFICIENCY MEASUREMENT IN FLIGHT SIMULATORS

Edwin Cohen, Ph.D.
Simulation Products Division, The Singer Company
Binghamton, New York 13902
An increasing proportion of the work of military personnel involves participation in man-machine systems, such as aircraft, tanks, ships, missiles, and command and control systems, that are amenable to realistic, man-in-the-loop simulation. In such simulation, illustrated in Figure 1, the man is located in a replica of his normal work station, and provided with the same stimuli -- visual, aural, proprioceptive -- that comprise inputs to him during combat or other operations. He reacts to these stimuli, generating suitable outputs or responses -- control movements, switch actuations, verbal communications, keyboard entries; the simulator, which generally incorporates a digital computer, receives and acts upon these responses to modify the stimuli on a real-time basis. Some simulations are designed for crews, teams, or units, rather than single individuals. For some command and control or missile control simulations, the operational work station is used, with synthetic rather than real-world inputs supplied. This is sometimes referred to as stimulation of the display system at the work station.
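The closed loop just described -- stimuli out to the man, responses in to the digital computer, stimuli modified in real time -- can be sketched as follows. This is a minimal illustration only; the function names, the toy flight dynamics, and the 30-frame-per-second rate are assumptions for the sketch, not details of any Singer simulator.

```python
# Hypothetical sketch of the man-in-the-loop cycle described above: the
# simulator presents stimuli, samples the trainee's responses (control
# movements, switch actuations), and updates vehicle state in real time.
# All names and the toy dynamics are invented for illustration.

def read_responses():
    """Stand-in for sampling cockpit controls; returns stick and throttle."""
    return {"stick_pitch": 0.05, "throttle": 0.7}

def update_state(state, responses, dt):
    """Toy vehicle dynamics: integrate airspeed and altitude over one frame."""
    state["airspeed"] += (responses["throttle"] - 0.5) * 10.0 * dt
    state["altitude"] += responses["stick_pitch"] * state["airspeed"] * dt
    return state

def present_stimuli(state):
    """Stand-in for driving instruments, motion, and visual displays."""
    return f"IAS {state['airspeed']:.1f} kt  ALT {state['altitude']:.0f} ft"

state = {"airspeed": 140.0, "altitude": 2600.0}
dt = 1.0 / 30.0                      # assumed 30 frames per second
for frame in range(3):               # a few frames of the real-time loop
    responses = read_responses()     # man -> machine
    state = update_state(state, responses, dt)   # digital computer
    print(present_stimuli(state))    # machine -> man
```

Measurement, in this framing, amounts to logging the `responses` and `state` streams as the loop runs.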
Positions amenable to simulation, although representing a small minority of armed forces personnel, contribute disproportionately to the effectiveness of each of the armed services. Because of the crucial nature of these positions, substantial efforts are invested in the selection and training of men to fill them, and in the assessment and verification of their combat readiness. Simulation plays a large and increasing role in training. For aircraft pilots, simulator training is far less costly than training in the aircraft; for decision makers in command and control systems, it allows practice of combat situations not otherwise available in peacetime. Simulators are also used for such civilian positions as airline flight crews, nuclear power plant operators, air traffic controllers, and automobile drivers.
Simulators have been a locus of measurement for a variety of purposes, indicated by the following sampling, which is meant to be illustrative rather than comprehensive:

1. Selection. AFHRL's Personnel Research Division has a program under way to determine the utility of the Singer GAT-1 simulator for selection of pilots for undergraduate pilot training.

2. Training Feedback. All current training simulators have provisions for furnishing the instructor with data on trainee performance. Some, such as the ASUPT, a T-37 simulator to be used by AFHRL at Williams AFB for training research, and the 2B24, a simulator of the UH-1 helicopter used for transition and instrument training at Ft. Rucker, furnish data directly to the trainee in the cockpit.

3. Determining the Effect of Environmental Variables on Human Performance. Examples here include the effect of spacecraft-
generally those in which the tester is comfortably superior to the testee. Thus the basic methodology of personnel testing and measurement has developed around paper-and-pencil instruments to the point where no one thinks it incongruous that "Tests and Measurements" textbooks devote upwards of 90% of their space to paper-and-pencil tests.
2. Multiplicity of Measures. When a psychometrician finally decides to gather human performance data on a simulator, he suffers an embarrassment of riches -- scores or even hundreds of measures are available, all possessing face validity. Figure 2 shows data obtained during the landing of a 707 simulator; these items represent a selection from a much larger number of measures available.
3. Lack of Parallel Items. In a landing, there is just one indicated airspeed at touchdown; to get a parallel item, another landing must be conducted. This lack of parallel items within a given maneuver contrasts with the easy availability of parallel items for most paper-and-pencil test domains.
4. Difficulty in Combining Item Scores. With paper-and-pencil tests, the number of correctly answered items usually serves as an overall or summary measure; occasionally, differential weighting is employed. Only rarely is as complex an algorithm as successive hurdles used to derive a summary measure from item scores. Much more complex algorithms are required to characterize proficiency in tasks such as landing a 707. For that landing task, items such as indicated airspeed, sink rate, distance from runway threshold, distance from runway centerline, and aircraft attitude (pitch, heading, and roll) -- all measured at touchdown -- cannot be evaluated independently of each other, as paper-and-pencil test items are generally evaluated, but must be combined, using a rather complex, yet to be developed algorithm.
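The kind of joint evaluation called for here can be illustrated with a toy scoring rule. Everything specific in the sketch is an assumption for illustration -- the tolerance bands, the 4-of-5 threshold, and the sink-rate veto are invented, since the text itself notes the real combining algorithm had yet to be developed.

```python
# Hypothetical sketch of combining interdependent touchdown measures into
# a single summary judgment. Tolerance bands and the joint rule are
# invented for illustration, not the 707 criteria.

TOLERANCES = {                        # item: (low, high) acceptable band
    "ias_kt":         (120.0, 135.0),
    "sink_fpm":       (0.0,   360.0),
    "dist_thresh_ft": (500.0, 2500.0),
    "centerline_ft":  (-27.0, 27.0),
    "pitch_deg":      (0.0,   8.0),
}

def score_touchdown(measures):
    """Return (items_within_band, acceptable) for one simulated landing."""
    in_band = {item: lo <= measures[item] <= hi
               for item, (lo, hi) in TOLERANCES.items()}
    n_ok = sum(in_band.values())
    # Items are combined jointly, not independently: an out-of-band sink
    # rate vetoes the landing outright (a successive-hurdles style rule),
    # and otherwise at least 4 of the 5 items must be within tolerance.
    if not in_band["sink_fpm"]:
        return n_ok, False
    return n_ok, n_ok >= 4

landing = {"ias_kt": 129.0, "sink_fpm": 180.0, "dist_thresh_ft": 1200.0,
           "centerline_ft": 4.0, "pitch_deg": 3.1}
print(score_touchdown(landing))
```

Even this toy rule shows why item scores cannot simply be summed: the same count of in-band items yields different verdicts depending on which item is out of band.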
5. Inapplicability of Conventional Item Analysis Techniques. Item analysis is most widely used for multiple-choice items. Each multiple-choice item generally represents a separate behavior sample, and the alternatives are discrete. When, as they often do, measurements on a simulator cover various aspects of a single behavior sample, rather than separate behavior samples, and use continuous scales, rather than discrete alternatives, conventional item analysis is inapplicable.
6. Unavailability for Measurement of Behaviors Not Directly Reflected in Machine Inputs. In the simulator, the trainee's inputs to the machine are subject to measurement, but the processes by which these
inputs are made are not. For example, in a flight simulator, we would normally have no way of knowing whether the pilot used an efficient visual scan pattern in checking instrument readings. The same objective performance could be achieved by two pilots with very different instrument scan patterns -- one using an efficient pattern and thus minimizing his work load, the other compensating for a less efficient pattern by working harder. When dealing with a simulator that enables measurement of such a large number of aspects of vehicle or system performance by merely specifying the parameters to be measured, it is all too easy to forget about those aspects of human performance not directly reflected in machine inputs.
By shifting our emphasis from paper-and-pencil testing to the kind of performance tests for which the simulator is such an appropriate vehicle, we would go far toward meeting the objectives of critics of our current testing posture, such as McClelland, who tells us to test for competence rather than intelligence, and that "the best testing is criterion sampling" (McClelland, 1973, p. 7), and O'Leary, who advocates the advantages of job simulation approaches to selection over techniques lacking content validity (O'Leary, 1973, p. 149).
In conclusion, we have a long way to go, in exploiting simulators as measurement tools, before their one basic weakness -- inability to measure behavior not directly reflected in machine inputs -- becomes a truly limiting factor. By refusing to be shackled to psychometric practices appropriate only to paper-and-pencil tests, we can exploit this powerful tool to measure a wide range of individual and team behavior, obtaining relevant, reliable, valid, and socially useful data at modest cost.
REFERENCES

1. McClelland, D. C. "Testing for Competence Rather Than for 'Intelligence'." American Psychologist, 1973, 28, 1-14.

2. O'Leary, L. R. "Fair Employment, Sound Psychometric Practice, and Reality: A Dilemma and a Partial Solution." American Psychologist, 1973, 28, 147-150.
FIGURE 1. FLIGHT SIMULATOR BLOCK DIAGRAM

STORED PROGRAM: vehicle dynamics, system operation, demonstrations, programmed instruction, performance measurement.

COCKPIT -- Stimuli: instruments, indicators, motion, visual, control feel, audio. Responses: stick, pedals, throttle, levers, switches, voice.

INSTRUCTOR STATION -- Displays: instruments, indicators, tabular CRT, graphic CRT, audio. Records: X-Y recorder, strip chart, CRT recorder, teletypewriter, line printer, mag tape. Controls: switches, keyboard.
FIGURE 2. ILS LANDING -- sample performance measurement printout from a 707 simulator. The printout lists initial conditions and reference data (outside air temperature, barometric pressure, wind, turbulence, runway ice, gross weight, CG, VREF, VMCA), then, for each approach segment -- localizer intercept, localizer track, glide slope intercept, glide slope track to middle marker, flare, touchdown, and ground roll -- measured items such as indicated airspeed, altitude, localizer and glide slope deviations (maximum, cross counts, and RMS), roll, pitch, sink rate, runway heading deviation, centerline alignment, distance from threshold, and runway used, each with standard, tolerance, and deviation columns.
THE PERSONNEL AND TRAINING EVALUATION PROGRAM:
A Working Program for Improving the Efficiency and Effectiveness of Fleet Ballistic Missile Weapons System Training

Part I: Scope and Achievement

Lcdr Clarence L. Walker, USN
Central Test Site for the Personnel and Training Evaluation Program

INTRODUCTION
In April of 1969 the Chief of Naval Operations issued an instruction which brought into being the Fleet Ballistic Missile Weapons System Training Program. This Program was aimed at improving the effectiveness and efficiency of training in the areas of the missile, missile launcher, missile fire control and navigation on FBM submarines and at their support activities. Specific gains looked for were the shortening of the replacement pipeline, provision for advanced, vice refresher, training at the ship's homeports and standardization of the advanced training available. A built-in part of the FBM Weapons System Training Program was a monitor called the Personnel and Training Evaluation Program (PTEP). The program:

"provides the organization, procedures, and responsibilities required to accomplish the qualitative assessment of personnel knowledge and skill levels for officer and enlisted personnel during replacement training and while assigned to SSBNs, FBM Tenders, and FBM training facilities. The PTEP also provides for evaluation of training facilities, hardware, documentation, and courses of instruction for use as the basis for implementing improvements in training and in all elements of the FBM Weapons System Training Program."
Various aspects of PTEP have been presented at the last two meetings of this association. For those presentations PTEP was in the speculative stage. This paper addresses the early working stages. To understand PTEP, however, it is necessary to look at the climate in which it operates.
PTEP is presently limited to the Poseidon Missile community, which consists of:

1. A replacement training site - Naval Guided Missiles School, Dam Neck, Virginia

2. Two off-crew training sites - Naval Submarine School, Groton, CT, and FBM Submarine Training Center, Charleston, SC

3. Two submarine tenders

4. Twenty-six FBM submarines, each with 2 crews

The submarine crews comprise the majority of the personnel and absorb the vast majority of the effort.
FFiM submarines utilize a two-crew concept: One crew mans and<br />
operates the submarine while tte'otber crew is available at an off-<br />
crew site for training. Planning revolves around a six-month patrol<br />
cycle. This consists of one patrol/refit period and one training/<br />
leave period. Each man remains aboard for approximately six patrol<br />
cycles or three years: he is then rotated to shore duty, frequently<br />
at a training site; follcwing this he returns to sea again. All<br />
f<br />
personnel selected for FBM Wiapons and Navigation Training are of<br />
above average intelligence. The submarine pcrsonncl have an initial<br />
six year obligation, with the exception of the Torpedomen responsible<br />
for the launch tubes. First-term reenlistments run in excess of 40%.<br />
These then are the personnel involved in the FBM Weapons System Training Program. The training program itself is not unique in its academic concepts, but it is innovative in the area of program management. As with any viable training system, the FBM Weapons System Training Program first provides training objectives. It determines where, when and by whom the learning is to take place; it provides the learning opportunity, and then assesses the quality of the end product.
PERSONNEL PERFOR.'?XICE PROFILES Ah'D TRAINING PATH SYSTE!4 (PPP)<br />
-<br />
The Personnel Performance Profile (PPP) is the basic building block of<br />
the system. The profiles are for the most part developed by hardware<br />
contractors and state in a standardized format the skills and<br />
knowledge necessary to operate and maintain a system or equipment.<br />
A system called the Training Path System assigns to each FBM Navy<br />
Enlisted Classification (NEC) the appropriate profile items and levels<br />
of achievement. This same system, by use of profile items<br />
and levels of achievement, assigns material to replacement, advanced<br />
and on-board training. Curricula are prepared using the profiles<br />
and Training Path System. Informal training materials in support of<br />
the profiles have been developed, and watch qualification require-<br />
ments have been keyed to the profiles and levels of achievement. The<br />
FBM Weapons System Training Program provides a system whereby the<br />
required knowledge for an individual or the specified content of a<br />
course can be conveniently identified by listing profile items and<br />
achievement levels.
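The Training Path System's assignment of profile items and achievement levels to each NEC can be pictured as a simple lookup structure. This is only an illustrative sketch: the NEC codes other than FT-3306, the item numbers, and the level values below are invented, and the real profiles are far larger.

```python
# Hypothetical sketch of the Training Path System mapping: each Navy
# Enlisted Classification (NEC) is assigned PPP items with required
# achievement levels. Item identifiers and levels here are invented.
TRAINING_PATH = {
    "FT-3306": {  # Fire Control Technician (an NEC named in the paper)
        "PPP-010-05": 3,   # profile item -> required achievement level
        "PPP-100-12": 2,
        "PPP-204-01": 1,
    },
    "MT-0000": {  # a second, wholly hypothetical NEC
        "PPP-010-05": 2,
        "PPP-310-07": 3,
    },
}

def course_content(nec):
    """Identify the required knowledge for an NEC by listing profile
    items and achievement levels, as the text describes."""
    return sorted(TRAINING_PATH.get(nec, {}).items())
```

On this view, a course specification or an individual's requirement reduces to such a list of (profile item, achievement level) pairs.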
PTEP ADMINISTRATION<br />
The Personnel and Training Evaluation Program is administered by 35<br />
military personnel and a contractor civilian staff of about 10 people.<br />
One officer and an enlisted staff of experienced technicians are<br />
stationed at each training site to support the testing and evaluation<br />
tasks. The bulk of the enlisted staff is at the Central Test Site in<br />
Dam Neck, Virginia. The program has a broad charter of evaluation.<br />
Evaluation requires facts. PTEP gathers its facts through examina-<br />
tions, collection of personal training information, ship and equipment<br />
operational history, and review of curricula with their supporting<br />
material. Examinations are the primary source of data.<br />
Two separate types of examinations are administered by PTEP. One<br />
measures the knowledge of an individual against the knowledge required of<br />
him by the Training Path System. This is called a System Achievement<br />
Test, or SAT. The other type of examination measures the performance<br />
of a course against its Training Path System requirements. This is<br />
called a Course Achievement Test, or CAT. In both cases, examination<br />
content is based on PPP items and is prescribed by<br />
the Training Path System.<br />
Course Achievement Tests<br />
Course Achievement Tests are designed to measure both the performance<br />
of an individual in a course and the performance of the course against<br />
its prescribed content. The instructor does not see the test until<br />
it is time to administer it and so has no opportunity to "teach" the<br />
test. Standardization of training between sites can also be measured
with a Course Achievement Test. Few CAT's have been produced to date<br />
primarily because the standardized courses they support have not been<br />
in place. No meaningful analysis has been possible with the results.<br />
System Achievement Tests<br />
The greatest field of endeavor, and heart of the Personnel and Train-<br />
ing Evaluation System from the testing point of view, is System<br />
Achievement <strong>Testing</strong>. The tests were implemented incrementally by NEC<br />
over a period of 9 months, starting in August 1971. To date there have<br />
been more than 4,000 administered. Each of the several tests in use<br />
is designed to cover the full scope of the knowledge required for a<br />
specific Navy Enlisted Classification. Content is determined using<br />
the Personnel Performance Profiles and the Training Path System.<br />
The tests consist of two parts: a knowledge part consisting of from<br />
180 to 360 multiple choice questions, and a skill part which uses a<br />
paperwork fault isolation technique developed by<br />
Data-Design Laboratories. Depending on the NEC involved, the tests<br />
take three or four three-hour testing sessions. Each of the testing sites<br />
is connected by a data link with a central computer complex. The<br />
tests are scored, training recommended, and exam question information<br />
stored for future analysis by the computer. Test results are returned<br />
to the testing site by teletype. From the outset these SAT's have<br />
proved to be a reliable indicator of individual knowledge. The<br />
primary measure of test validity has been a comparison of test<br />
results with the subjective appraisal of supervisors; the tests have served as a<br />
vehicle to confirm opinions readily derived by asking supervisors.<br />
This can be seen from the nature and construction of the tests.<br />
(Figure 1 Sample Test Result)<br />
Each test is structured to provide information on the sub areas which<br />
make up an NEC and to recommend specific remedial action where indi-<br />
cated. Analysis of individual areas provides information on the<br />
effectiveness of training in each of the areas.<br />
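The sub-area breakdown described above can be sketched as a simple report: average a crew's scores within each knowledge area and flag areas that fall below the fleet average. The area names and all scores below are invented for illustration; the actual report format is shown in Figure 1.

```python
# Hedged sketch of a per-area SAT breakdown: crew averages per knowledge
# sub-area, flagged for remedial action when below the fleet average.
# Area names and all numbers are invented.
def area_report(crew_scores, fleet_avg):
    """crew_scores: {area: [individual scores]}; fleet_avg: {area: avg}.
    Returns {area: (crew_avg, 'REMEDIAL' or 'OK')}."""
    report = {}
    for area, scores in crew_scores.items():
        avg = sum(scores) / len(scores)
        flag = "REMEDIAL" if avg < fleet_avg[area] else "OK"
        report[area] = (round(avg, 1), flag)
    return report

# Illustrative data only:
crew = {"A FBM Weapon System": [44, 50], "B Control Console": [60, 62]}
fleet = {"A FBM Weapon System": 50, "B Control Console": 57}
```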
Personnel Data System<br />
The Personnel Data System is the part of the Personnel and Training<br />
Evaluation Program which changes a useful testing scheme into a<br />
valuable evaluation tool. This system picks up a man as he leaves<br />
replacement training. It records each man's training and duty sta-<br />
tion history and provides the information required for measuring the<br />
effectiveness of training at various locations and on personnel<br />
with varying training backgrounds. It also provides the statistical<br />
base against which achievement or lack of it can be measured. At<br />
present there are over 3,000 records in the Personnel Data System.<br />
(Figure 2 Personnel Data Sheet)<br />
One can thus see that PTEP provides the capability for collecting infor-<br />
mation on the training and personnel performance for which the FBM<br />
Weapons System Training Program is responsible. It also provides a<br />
broad source of background material against which to evaluate this<br />
information.<br />
PTEP ACCEPTANCE<br />
When first introduced to the fleet, System Achievement <strong>Testing</strong> was not<br />
met with overwhelming enthusiasm. Guarded skepticism is about as close<br />
as anyone came to acceptance. The rigorous schedule of examinations<br />
already participated in by FBM submarine crews in the areas of nuclear<br />
propulsion and nuclear weapons safety contributed to this unenthusi-<br />
astic reception.<br />
Gaining acceptance for the System Achievement Test therefore was not a<br />
simple job. A strong public relations effort was launched which<br />
started with the indoctrination on the FBM Weapons System Training<br />
Program of everyone taking the exam as well as the administrative<br />
personnel responsible for using the results. To further aid both in<br />
acceptance of testing and the use of test results, a policy was<br />
adopted whereby only the command examined, and not its superiors in<br />
the chain of command, was supplied with the test results. This gave<br />
each commanding officer the capability of evaluating examination<br />
results in the light of his own knowledge of his ship's training<br />
needs and to make judicious use of the results. The practice<br />
removed from both the commanding officer and the PTEP organization<br />
the requirement to defend results from an untried test instrument.<br />
Perhaps the most important step toward acceptance was the procedure<br />
whereby each set of test results was returned to the command by an<br />
officer assigned PTEP duties. The results were discussed with the<br />
commanding officer and department heads in the light of individual<br />
and overall crew performance. Carefully kept records allowed these<br />
PTEP officers to review trends and to point out consistent low and<br />
high performers as well as anomalous results. The personal and<br />
individual interest shown by the PTEP organizations at the training<br />
sites effectively sold the testing system to the ships.<br />
Training recommendations were at first reluctantly used by the sub-<br />
marines. An early sampling showed that courses recommended by PTEP<br />
were used at about the same rate as randomly chosen courses. A<br />
graphical analysis of successive test results presented a convincing<br />
picture that use of training courses improved performance in the<br />
area where the course applied, and that the most gain for training<br />
time expended was realized when the time was applied to weak areas.<br />
This seemingly self-evident conclusion, convincingly demonstrated<br />
to management personnel, has much increased the utilization of PTEP<br />
course recommendations. The use of self-study material recommended<br />
by PTEP was another area that gained slow acceptance. As with<br />
courses, however, PTEP has been able to demonstrate that men who use<br />
self-study material show improvement in their overall performance;<br />
consequently the use of this training material has increased<br />
considerably.<br />
ANALYSIS AND EVALUATION<br />
PTEP's business is much more than improving training utilization.<br />
Since January of 1973 sixteen evaluations have been completed on such<br />
diverse topics as: curriculum management materials, fire control<br />
software training, training of submarine tender personnel, and the<br />
restructuring of a sonar training course. Evaluation of test<br />
results of individuals and crews has shown that personal and leader-<br />
ship problems are reflected in performance trends and anomalies;<br />
these results could give early indication of personnel problems. It<br />
must be pointed out that to commence and continue a program such as<br />
PTEP, rigorous and continuing internal evaluation of material, pro-<br />
cedures and current objectives is necessary. Much attention has<br />
been focused on this facet of PTEP. Due to the nature of PTEP,<br />
external suggestions for improvement are not tardy in coming either.<br />
OPERATIONAL CRITERIA<br />
The Personnel and Training Evaluation Program described thus far is<br />
one which closes a neat academic loop--do poorly on our test, take<br />
our instruction and you will do better on our next test.<br />
The education provided, however, is not an end in itself, but a<br />
means to an end. That end is the effective operation of the Fleet<br />
Ballistic Missile Weapons System. Correlation of training with<br />
actual operations is of paramount importance in measuring the effect-<br />
iveness of training. Since PTEP can accurately reflect training,<br />
comparing PTEP results with operational information should link the<br />
training and operating worlds. FBM submarine patrols and equipment<br />
operation are meticulously documented. Equipment failures are<br />
analyzed, as are operational procedures and repair procedures, by<br />
sources outside PTEP. From this wealth of data, information should<br />
be available to approach criterion testing backed by operational<br />
statistics. The problem, in this continuing effort, is sorting out<br />
the multitude of difficult-to-relate variables in order to correlate<br />
operational information with testing information. To date, PTEP<br />
has found correlation in test and operational trends. Spearman<br />
Rank Difference correlations have been positive and have run as<br />
high as +.83. By July of 1973 correlation of test results with<br />
operational information was sufficiently strong to warrant the<br />
release of test results to the Submarine Group Commanders to assist<br />
them in evaluating the personnel and training needs of their subordi-<br />
nate commands. PTEP can indicate crews with an increased probability<br />
of difficulty in handling material problems arising at sea. It is in<br />
no way able to predict casualties.<br />
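The Spearman Rank Difference correlation reported above (values as high as +.83) is computed from the squared differences of paired rankings, rs = 1 - 6Σd²/(n(n² - 1)). A minimal sketch follows; the crew data in the tests are invented, and the formula is exact only in the absence of ties.

```python
def rank(values):
    """Rank values (rank 1 = highest), averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman Rank Difference correlation: rs = 1 - 6*sum(d^2)/(n(n^2-1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Identical rank orderings yield +1.0; fully reversed orderings yield -1.0, with the paper's observed +.83 indicating strong agreement between test and operational rankings.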
CURRENT STATUS<br />
What has been the net impact of PTEP on the ships supported by the<br />
FBM Weapons System Training Program? It has provided a positive<br />
means of identifying the best use of training time. It<br />
has indicated where serious training deficiencies exist, particularly<br />
in supervisory personnel. It has provided a tool for identifying to<br />
higher commands personnel and training deficiencies beyond the<br />
immediate control of the submarine Commanding Officer.<br />
For the training command, PTEP has standardized definitions of objec-<br />
tives and insisted upon compliance with these objectives. For<br />
management commands, PTEP has demonstrated the effectiveness and<br />
ineffectiveness of various management tools built into the FBM<br />
Weapons System Training Program. It has provided a quickly accessible<br />
pool of information for analyzing crew, rate, or NEC problems. PTEP<br />
has the capability and has made beginnings towards measuring the<br />
program against the product of the previous program. Both the<br />
Naval Examining Center and the Bureau of Naval Personnel have been<br />
provided with PTEP products.<br />
PTEP has made a constructive contribution to the development and imple-<br />
mentation of the FBM Weapons System Training Program. Its importance<br />
in maintaining a responsive training system continues to grow as the<br />
Training Program grows. With the addition of the automated access<br />
to information files, which is expected before the end of the year,<br />
the possibilities for analyzing many facets of the complex personnel<br />
and system inter-relationships of military training become awesome.<br />
[Figure 1 - Sample Test Result: a Group SAT <strong>Report</strong> for an SSBN Blue crew, NEC 3306 (Fire Control Technician SSBN), listing knowledge areas such as the FBM weapon system and equipments, control console and power subsystems, platform positioning equipment, and the digital control computer, each with crew knowledge and skill averages alongside fleet figures.]<br />
THE PERSONNEL AND TRAINING EVALUATION PROGRAM:<br />
A Working Program for Improving the Efficiency and<br />
Effectiveness of Fleet Ballistic Missile Weapons<br />
System Training<br />
Part II Program Development<br />
by<br />
Frank B. Braun<br />
Data-Design Laboratories<br />
Norfolk, Virginia<br />
The purpose of this paper is to discuss some of the problems and<br />
decision points encountered in the development and implementation of<br />
the Personnel and Training Evaluation Program (PTEP). As in the case<br />
of any system development, not all the decisions made in the heat of<br />
battle were right or the most appropriate. Many of our solutions may<br />
be of interest, however, since we do have a working program for test-<br />
ing and evaluating military technical specialists.<br />
PTEP is basically the product of system engineering. The personnel<br />
involved in the development of the program are engineers, instructors,<br />
and technical specialists; a systems analyst and a programmer applied<br />
automatic data processing (ADP) procedures to almost all aspects of<br />
the program.<br />
Since "who" we were going to evaluate was pre-determined, i.e., the<br />
FBM weapons and navigation technicians, the first problem we had to<br />
tackle was what we were going to use to perform the evaluation. We<br />
started with an advantage, since a job task analysis already existed<br />
in the form of the Personnel Performance Profiles (PPP) and the<br />
Training Path System (TPS). The PPP and TPS provide the knowledge<br />
and skill requirements for each technician in the program, and are used<br />
as the standards for all elements of the program.<br />
We began our task by searching for the most appropriate types of test<br />
instruments to test the PPP requirements. Review of the many reference<br />
sources on the subject led us to the decision that the four-alternative,<br />
multiple-choice question would be the most satisfactory for testing<br />
knowledge items. Since the questions would be acquired from many<br />
different sources, i.e., equipnent manufacturers and various Navy<br />
school instructors, a specification was developed to insure test item<br />
standardization. Along with the question, the test item writer is<br />
required to provide various information to relate the question to the<br />
PPP/TPS standards (Figure 1). For the skill items contained in the<br />
PPP/TPS, it was determined that performance tests would be the most<br />
desirable test instruments, but were impractical for a large scale<br />
testing program. However, Decision Development System (DDS) exercises,<br />
reported on at the 1970 and 1972 MTA meetings, could be keyed to the<br />
PPP/TPS standards and appeared to be a satisfactory means of testing<br />
the technician's application of knowledge. We decided to utilize the<br />
DDS as the skill test instrument.<br />
The next task was to develop tests from the PPP/TPS requirements which<br />
would measure each technician's knowledge and skill level. The problem<br />
that became apparent in this area was the wide range of requirements<br />
for each technician. In the case of the Fire Control Technician, for<br />
example, there are 28 primary equipment profile tables, 10 secondary<br />
equipment profile tables, and two system profile tables. Each profile<br />
contains approximately 35 knowledge and skill items with varying<br />
[Test item reference coding relating the question to the PPP/TPS standards]<br />
A/D CONVERTER. WHERE IS THE PRE-OPERATIONAL CHECKOUT<br />
PROCEDURE FOR THE A/D CONVERTER DESCRIBED?<br />
1. A/D CONVERTER TECH. MANUAL<br />
2. FBM STD. MAINTENANCE PROCEDURES<br />
3. FBM STD. NAVIGATION OPERATING PROCEDURES<br />
4. NAVIGATION SUBSYSTEM MANUAL<br />
Figure 1 - SAMPLE TEST ITEM<br />
numbers of sub-items under each item (Figure 2). Obviously, we could<br />
not test each and every requirement within a reasonable length of time.<br />
Our solution to this was the System Achievement Test (SAT) which con-<br />
sists of both a knowledge and a skill part. The knowledge part of this<br />
test is generated using a weighted random sampling method. Each tech-<br />
nician's PPP tables are assigned weighting factors based on their<br />
relative importance to the overall requirements. Each PPP item within<br />
a table is also assigned a weighting factor based on complexity and<br />
importance in relation to the other items in the table. When a test is<br />
generated, the computer uses the weighting factors to develop a sampling<br />
bank of all applicable questions. Then, a random generator selects from<br />
the sampling bank and generates the knowledge part of the test version.<br />
The skill part of each SAT is generated manually after a thorough<br />
review of the DDS exercises applicable to the l?PP standards. Each<br />
test version undergoes a thorough engineering review by system experts<br />
and is administered to a pilot group for further refining prior to<br />
issuing as a new test.<br />
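The weighted random sampling described above can be sketched in a few lines: table and item weighting factors determine how often a question appears in the sampling bank, from which the knowledge part is drawn without duplicates. The question identifiers and weights below are invented; the real generator runs on the central computer against the full question files.

```python
import random

# Hedged sketch of SAT knowledge-part generation by weighted random
# sampling. Question IDs and weighting factors are invented.
def build_bank(questions):
    """questions: list of (question_id, table_weight, item_weight).
    Each question enters the bank in proportion to its combined weight."""
    bank = []
    for qid, table_w, item_w in questions:
        bank.extend([qid] * (table_w * item_w))
    return bank

def generate_knowledge_part(questions, n_items, seed=None):
    """Draw n_items distinct questions, heavier-weighted items more often."""
    rng = random.Random(seed)
    bank = build_bank(questions)
    picked = []
    while len(picked) < n_items and bank:
        qid = rng.choice(bank)
        picked.append(qid)
        bank = [q for q in bank if q != qid]  # no duplicate questions
    return picked
```

A question whose table and item weights are both high therefore dominates the bank and is sampled more often, mirroring the relative-importance weighting the text describes.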
Another type of test was developed to use in the training courses<br />
administered at the FBM Training Activities located at the Guided<br />
Missiles School, Dam Neck, Virginia, the Fleet Ballistic Missile<br />
Submarine Training Center, Charleston, South Carolina, and the Naval<br />
Submarine School, New London, Connecticut. This test, named the<br />
Course Achievement Test (CAT), presented a different set of problems.<br />
The PPP items covered by each curriculum are indicated in a chart<br />
which relates course objectives to specific PPP items (Figure 3).<br />
This chart, the Objective Assignment Chart (OAC), becomes the basis<br />
for designing the CAT for the training course. It was decided to make<br />
the CAT a criterion-referenced test, since the standards could be defined<br />
[Figure 2 - SAMPLE PPP TABLE INDEX: Training Path Chart for Fire Control Technician (SSBN) Poseidon, FCS Mk 88 (NEC FT-3306), indexing some 45 profile tables spanning the FBM weapon system (system level), the fire control subsystems (alignment, erection, missile motion, digital read-in, master clock and timing, digital control computer, and related equipment), the missile, missile launching, and navigation subsystems, and the weapon system ship support subsystem.]<br />
adequately to support this type of test. Our biggest problem to date<br />
has been in obtaining stabilized, approved curricula for which tests<br />
can be designed. These curricula are now becoming available, and the<br />
CAT program is finally gathering momentum.<br />
Many problems arose in the actual administration of the tests. Three<br />
distinct environments existed which required different approaches to<br />
administering the tests. First, we had the submarine crew during its<br />
off-crew period. It was decided to use a dedicated Navy team at each<br />
site to administer the tests. This solved the potential problem of<br />
having civilian engineers give tests to military personnel. Every<br />
attempt was made to avoid the "Big Brother is watching you" image in<br />
order to obtain the best possible positive attitude towards the tests<br />
by the examinees.<br />
The second testing situation was with the submarine tender personnel.<br />
Since the tenders were located at remote sites, test packages were<br />
developed which could be administered by the tender personnel them-<br />
selves. This meant providing clear, easily understandable instructions<br />
for each test section and directions for return of the packages upon<br />
completion of the test.<br />
The third testing situation involved the CAT. This test is administered<br />
by the instructor at the scheduled time in the course. For the CAT, we<br />
developed a simple proctor guide and a punched, overlay type answer key.<br />
These materials, along with the test, are provided by the local Navy<br />
team to the instructor on the scheduled examination day. This procedure<br />
prevents the instructor from teaching the test.<br />
The System Achievement Test, because of its objective to test the<br />
technicians' total system capabilities, is a relatively lengthy test.<br />
The SAT ranges from one 3-hour session to four such sessions, depending<br />
on the particular PPP/TPS requirements. Obviously, with a test this<br />
long, we were concerned with the attitude of the personnel required to<br />
take it. Surprisingly, after the initial grumblings, the technicians<br />
appear to view the SAT as an integral part of their training program.<br />
Some technicians have even provided recommendations to improve various<br />
aspects of the tests.<br />
The DDS exercises require the use of the same technical documentation<br />
employed by the technicians when troubleshooting similar casualties<br />
onboard ship. We decided early in the development of the program that<br />
close monitoring of the change status of the publications would be<br />
required to insure currency of the tests. It was also decided that<br />
the local Navy team at each site should maintain its own technical<br />
library to insure positive control of the change status of the publi-<br />
cations used with the tests. This decision created extra work for the<br />
teams but reduced the cries of "this won't work" during the test sessions.<br />
During the first year of testing, the raw results were mailed from the<br />
test site to the central site for scoring. This created a delay of 10<br />
to 15 days from the completion of the test to the time the results<br />
were returned. The program managers in the Strategic Systems Project<br />
Office directed that this turnaround time be reduced to enable commands<br />
to schedule remedial training, if the test results indicated a signifi-<br />
cant deficiency, prior to departure for patrol. An optical scanning<br />
device and a teletype machine were used to solve this problem. The<br />
Navy team processes the examinees' answer sheets through the optical<br />
scanning machine and associated data set into the computer. The<br />
computer is used to score the tests, and the results are returned to<br />
the appropriate team via the teletype network (Figure 4). Turn-<br />
around time has been reduced to one or two days, depending on computer<br />
availability.<br />
One of the largest problems we encountered in program development was<br />
how to best score and analyze the test results. One goal was to deter-<br />
mine individual strengths and weaknesses, while another goal was to<br />
identify and correct deficiencies within the training program itself.<br />
We developed a "quick-look" SAT report to meet the first requirement.<br />
This report is based on the examinees' results compared to the existing<br />
fleet, or total, results contained in the answer file. Individual<br />
training recommendations are also contained in the SAT <strong>Report</strong> and are<br />
based on Z-scores greater than 0.5 below the fleet mean. This criterion<br />
was picked arbitrarily but has worked well in practice.<br />
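The Z-score rule just described can be sketched directly: standardize each examinee score against the fleet distribution for that area, and recommend training wherever the Z-score falls more than 0.5 below the fleet mean. The area names and all scores below are invented for illustration.

```python
import statistics

# Hedged sketch of the "quick-look" recommendation rule: flag any
# knowledge area where the examinee's Z-score against fleet results
# is more than 0.5 below the fleet mean. All data are invented.
def recommend(examinee_by_area, fleet_by_area, threshold=-0.5):
    """Return areas whose Z-score (vs. fleet scores) is below threshold."""
    flagged = []
    for area, score in examinee_by_area.items():
        fleet_scores = fleet_by_area[area]
        mean = statistics.mean(fleet_scores)
        sd = statistics.pstdev(fleet_scores)
        z = (score - mean) / sd  # standardized distance from fleet mean
        if z < threshold:
            flagged.append(area)
    return sorted(flagged)

# Illustrative data only:
fleet_norms = {"missile launcher": [40, 50, 60], "fire control": [40, 50, 60]}
scores = {"missile launcher": 44, "fire control": 55}
```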
A more detailed analysis is conducted to meet the second evaluation goal.<br />
This analysis includes individual and group evaluation (i.e., submarine<br />
crew, tender crew, instructors, etc.), PPP table evaluation, and test<br />
instrument evaluation. These evaluations are conducted after each test<br />
version is retired and replaced with a new version. The FBM program<br />
provides a unique environment for many of these analyses. Community<br />
idiosyncrasies include (1) relatively stable crew composition, (2)<br />
detailed hardware performance data, (3) close liaison among all activi-<br />
ties, and (4) extensive personnel history data. These attributes<br />
enable crew performance and empirical test validity evaluations beyond<br />
those which are normally practicable.<br />
[Figure 4 - PTEP EDP Communication Network: examinee test answers flow from the remote test sites to the Central Test Site (CTS) computer over the teletype network, with voice phone, mail, and handcarry service as backup channels.]<br />
One fact became readily apparent when crew scores were compared ini-<br />
tially. The differences in higher and lower rated technicians in the<br />
various crews greatly affected the group overall score. A method was<br />
devised to determine an "expected" score for each group based on what<br />
the group score would have been, had each examinee scored average for<br />
his rate. Actual and expected scores are plotted on crew trend analy-<br />
sis graphs and assist greatly in spotting significant deviations.<br />
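The "expected" score computation amounts to substituting each man's fleet rate average for his actual score and re-averaging. A minimal sketch follows; the rates and all numbers are invented.

```python
# Hedged sketch of the actual-vs-expected crew score method: the
# expected group score is what the crew would average if each examinee
# scored the fleet average for his rate. Data below are invented.
def expected_and_actual(crew_roster, rate_avgs):
    """crew_roster: list of (rate, actual_score); rate_avgs: {rate: fleet avg}.
    Returns (actual_avg, expected_avg) for the group."""
    actual = sum(score for _, score in crew_roster) / len(crew_roster)
    expected = sum(rate_avgs[rate] for rate, _ in crew_roster) / len(crew_roster)
    return actual, expected

# Illustrative data only:
crew_roster = [("FT1", 62), ("FT2", 55), ("FT3", 41)]
rate_avgs = {"FT1": 60, "FT2": 52, "FT3": 45}
```

Plotting the two averages over successive tests separates a genuinely weak crew from one that merely carries a less senior rate mix.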
During test item analysis, those test items whose characteristics do<br />
not meet established discrimination criteria are automatically identi-<br />
fied for review and printed out from the computer's test item file.<br />
These test items are then reviewed by Central Test Site personnel to<br />
determine if they should be deleted or revised, or if they provide<br />
an indication of a training problem.<br />
We recognized early in the PTEP developmental phase that extensive<br />
computer data processing would be required to handle the quantity of<br />
data to be contained within the system. Accordingly, the following<br />
five major computer program subsystems were developed (Figure 5):<br />
1. Test Generation Subsystem. The test generation subsystem<br />
includes the programs necessary to produce the tests.<br />
2. Personnel Data Subsystem. The personnel data subsystem<br />
maintains the personnel history records.<br />
a. A personnel file load and maintenance program<br />
creates and maintains personnel records.<br />
b. A personnel report program retrieves and prints out<br />
the contents of selected personnel records.<br />
c. A personnel survey program prints out key information<br />
from all personnel records.<br />
3. Test Scoring and <strong>Report</strong>ing Subsystem. The test scoring and<br />
reporting subsystem includes programs for data transmission,<br />
for scoring and reporting of test results, and for update of<br />
required permanent files.<br />
a. An input transmission (teleprocessing) program enables<br />
remote transmission of test data via optical scanning<br />
equipment.<br />
b. Test scoring and reporting programs assemble test data,<br />
score the data using stored answer keys, update fleet<br />
norm files, and prepare reports.<br />
c. An output transmission (teleprocessing) program transmits<br />
test result reports back to the originating test sites.<br />
4. Test Analysis Subsystem. The test analysis subsystem manipulates accumulated test results data to provide information used in evaluations of tests, personnel, and training.

   a. Item statistics programs compute attractiveness indices for each test item response, a discrimination index for each test item within an overall knowledge test part, and summary difficulty and discrimination indices by knowledge area, TOS level, etc. An engineering review report is also prepared to list those test items failing to meet a specified value of the discrimination index.

   b. A score analysis program computes standard T-scores for each examinee, computes actual and expected average scores for each examinee group, and identifies examinees and groups for later, manual evaluation.

   c. A skill test analysis program recomputes skill exercise scores based on final performance data accumulations and computes summary statistics for use in evaluation activities.
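The standard T-scores in (b) are the conventional linear transformation of z-scores to a mean of 50 and a standard deviation of 10; a brief Python sketch, standing in for the original COBOL:

```python
from statistics import mean, pstdev

def t_scores(raw_scores):
    """Standard T-scores: T = 50 + 10 * (x - mean) / sd."""
    m, sd = mean(raw_scores), pstdev(raw_scores)
    return [50 + 10 * (x - m) / sd for x in raw_scores]
```

Because the transformation is linear, group comparisons (actual versus expected averages) are unchanged in shape, only rescaled to a common metric.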
5. Query Subsystem. The query subsystem provides flexible access to the various files to select, retrieve, and report information for evaluation and management.

The majority of problems experienced during development of the ADP support systems were commonplace: the normal growing pains, problems with formats and record contents, and so on. Perhaps the most interesting (and frustrating) set of problems occurred in connection with transmission of test data from the test sites to the computer facility. Our equipment provides remote access via "BTAM", the Basic Telecommunications Access Method. We presumed, as did the vendors with whom we worked, that the remote access would be the least of our problems: you hook the pieces together, write a little program, and everything works. Unfortunately, it did not fall together that easily. If you have been in a similar situation, you probably have experienced the problems, sometimes humorous but more often maddening, that we ran into trying to obtain information, and to purchase and coordinate hookup of equipment from the telephone company, the vendors for the mark-sense reader and the data sets, and the computer manufacturer. It appeared that we were reinventing the wheel. The system finally jelled, and today we have a fairly smooth remote access capability.
Most of the PTEP ADP support programs are fairly typical COBOL applications. One, however, is not, and I will describe it briefly. As data began to accumulate, people naturally began wanting to use it. We had planned to create a subsystem similar to the Management Information Systems so much talked about in the past four or five years.
Unfortunately, we could not precisely define our information require-<br />
ments: we were accumulating quantities of data to which a seemingly<br />
endless number of questions could be addressed.<br />
We settled eventually for a general purpose "query" system. This system can access any computer-readable file of fixed-length records, including, of course, the PTEP files. It works in three stages, starting with a request for information:
1. A man processes the original question to define a logical<br />
procedure for extraction of information from the file,<br />
or files, that contain relevant data. His function is<br />
essential; he designs the algorithm to extract the<br />
desired information.<br />
2. A COBOL program reads the algorithm and writes the imple-<br />
menting program (also in COBOL).<br />
3. The new program, which is unique for the specific question,<br />
is compiled and executed, producing a report tailored to<br />
answer the original question.<br />
The system capabilities are extended by allowing the output from one<br />
question to become the input to a second question. Basically, if we<br />
have accumulated the relevant data, and if the question can be resolved<br />
to a set of logic equations, the query system can extract the needed
information. One major advantage of this system is its flexibility.<br />
We can change our file structure around without losing the ability to<br />
readily access all data.<br />
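The three-stage flow (a hand-specified extraction algorithm, a generator program that writes the implementing program, and a compile-and-execute step) can be miniaturized in Python; the actual generator emitted COBOL, and the record layout and condition syntax here are illustrative assumptions:

```python
def write_query_program(condition):
    """Stage 2: a generator program emits the source of a one-off
    extraction program from a logic expression over record fields."""
    return ("def run_query(records):\n"
            f"    return [r for r in records if {condition}]\n")

def run_generated(source, records):
    """Stage 3: compile and execute the generated program."""
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["run_query"](records)
```

Because each question gets its own generated program, the file structure can change without invalidating a fixed report suite; only the generator's knowledge of the record layout must be kept current.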
Looking to the future, we are working on new analysis techniques which will be required as the CAT program develops. Since these tests are criterion-referenced, the traditional difficulty and discrimination formulas contained in our test analysis programs will not apply. Several new types of test instruments are under investigation in the hope of decreasing the testing time and lowering the cost of acquiring new test vehicles. It is anticipated that the lessons learned in development of the PTEP for the Poseidon program will provide a sound footing for the forthcoming Trident program.
UNITED STATES ARMY WAR COLLEGE

The Army War College, the school at the top of the Army school system, operates an ungraded academic system. The rationale lies in the highly select student body (each year less than 4% of the eligible officers are selected to attend the War College). These students do not require the motivational pressure of grades. Within this group of high achievers there is an obvious and positive correlation between grading, with its resultant "relative standing," and student anxiety. The War College experience provides an educational opportunity for the student to reflect and learn in an unstressful environment with a minimum amount of student anxiety; hence, no formalized grading system and no relative standing or "honor graduate."
Since the mission of the US Army War College is to educate, and since learning cannot be accomplished without feedback, the task is to provide inventory and feedback but at the same time minimize student anxiety. The new inventory and assessment program attempts to provide personal and confidential feedback to the student without the information being part of his official records.

Students were invited as members of the experimental group to participate in the new Inventory and Assessment Program. Four faculty members have been assigned to work with the 60 students who will participate in comprehensive personal inventories and professional assessments. The assessment is accomplished through the use of five separate assessment techniques:

a. Baseline Inventory
b. Optional Personal Inventory
c. Professional Self-Assessment
d. Experiential Assessment
e. Value Gained Inventory
Baseline Inventory

This is a battery of measures designed to describe, for students, developmental coaches, faculty, and the Army, the skills, abilities, and characteristics of the senior Army officer at a point in time just before he enters the last and highest echelon of formal executive development, the USAWC. The student must be given a fairly comprehensive picture of his own skills, abilities, and characteristics. If he is to be encouraged to develop himself professionally and personally, he needs at least a quasi-rational basis for determining which areas to work on; and in the interests of attenuating anxiety, the student needs to know how he stands in skills, abilities, and characteristics in relation to his student contemporaries.
A second, ancillary rationale for the Baseline Inventory relates directly to the stated goal of "tailoring" the educational experience to fit the individual student. The Baseline Inventory provides, in essence, a reconnaissance of the incoming class and of its individual members. The reconnaissance permits--assuming a reasonable degree of pedagogical flexibility--adjustments in the curriculum to meet the measured needs of the class, or of specific individuals. In the name of tailoring for the individual, the inventory covers a variety of areas related directly or indirectly to the curriculum. The inventory can answer, for example, a question such as: "Does this student know the field of human relations well enough to write a book, or are the principles and terminology of the field so unfamiliar that he must exert extra effort to read or discuss the course material?"
The Baseline Inventory can achieve several inventory and assessment objectives. The student, comparing his individual profile to the competency range envelope of the class, has a rational basis for planning his own program for professional and personal development (e.g., attend workshops for "weak" areas). The student's counselor or coach, studying the profiles of his counselees, has a more objective basis for understanding and coaching his counselees as unique individuals. The faculty, studying the group profile and comparing it with baseline measures of previous student classes, has an objective tool for curriculum modifications which recognize differences in classes from year to year. The USAWC, studying individual profiles, can accomplish (with computer support) several desirable objectives: (1) tailor student committees toward equivalent skill composition; (2) identify students who are weak in a number of areas and require careful coaching programs; (3) identify, for recruitment as permanent faculty or for use as associate instructors, those students with particular strengths.
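The profile-against-class comparison described above can be sketched in Python; treating the "competency range envelope" as the per-area minimum and maximum across the class, and the bottom quarter of that range as "weak," are both assumptions made here for illustration:

```python
def class_envelope(profiles):
    """Per-area (min, max) across all student profiles in the class."""
    areas = profiles[0].keys()
    return {a: (min(p[a] for p in profiles), max(p[a] for p in profiles))
            for a in areas}

def weak_areas(profile, envelope, margin=0.25):
    """Areas where the student sits in the bottom quarter of the class range."""
    flagged = []
    for area, (lo, hi) in envelope.items():
        if hi > lo and (profile[area] - lo) / (hi - lo) < margin:
            flagged.append(area)
    return flagged
```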
The first Baseline Inventory was mailed to students early in the year, prior to the time they reported to the College. The inventory was scored and returned to students as "feedback." Additionally, the faculty was provided with pertinent data as a group result of the inventory to obtain a class profile.
There are several recognized shortcomings of the Baseline Inventory instrument. Each is related to the fact that the instrument is a self-inventory--the individual assesses his own levels of skill in areas as defined by the instrument. We can assume that some individuals will intentionally overstate their own capabilities. We can assume also that some individuals will misinterpret or not fully comprehend the implications of the area descriptions. These possibilities are borne in mind when interpreting individual profiles. However, for aggregate profiles, the shortcomings are less significant because errors occur in both directions; e.g., some individuals will intentionally understate their own abilities. The variations also provide the coach with one indicator of the student's self-concept.

Further, while we have thus far (for purposes of illustration) discussed the Baseline Inventory as a single instrument, we should, in reality, consider it as a series of related data collection tests and surveys. Initially, we began with the Baseline Inventory designed around curriculum objectives. Over time, we will substitute validated and established measurement instruments for portions of the Baseline
Inventory. Tests are available for measuring reading comprehension, listening comprehension, and writing skills. And the Educational Testing Service, given clearly-stated learning objectives and supporting course material, can develop a GRE-like comprehensive examination with alternate forms.

Optional Personal Inventory

The Optional Personal Inventory is designed to assist the student in this difficult introspective task. The methodology of the OPI is far less structured than that of the Baseline Inventory. Whereas the Baseline Inventory is conducted prior to the student's arrival at the USAWC, the Optional Personal Inventory is dependent for its success upon a sound student-coach relationship and cannot commence until such a relationship has been initiated. A battery of inventories is available to students on an optional basis. Included in the battery are instruments to measure attitudes; speaking, reading, and writing ability; personality; needs; and vocational interests. These are used in coaching sessions with incoming students.
There are two sources of evidence which suggest the strong potential of the OPI concept.

In the executive development field, one of the most promising and rapidly-growing methods of developing executive potential at higher levels is the assessment center. Executives visit these centers and participate in an "assessment week," wherein measures and observations are taken of various skills and characteristics which have been found to be closely related to executive proficiency. An intensive "feedback" session follows the assessment week. During the feedback session, the executive is given objective and professional interpretation of the measures made during assessment week. This is privileged information; the staff assessor is not a member of the parent organization; and the individual executive is free to do what he wishes with the feedback information he is given (written and graphic material, audio and videotape cassettes). The US Army investigated the potential of this approach.
A trial group of students received a professional "work-up" (scoring, interpretation, prognosis, prescription) of the available data. The student group was highly enthusiastic about this in-depth, objective exercise in self-development. There was virtually unanimous agreement that it was extremely valuable in terms of personal growth.

Professional Self-Assessment

The purpose of professional self-assessment is to individualize feedback to the student regarding his attempts to learn and his personal, professional, and leadership development, and to provide feedback to the instructor. Examinations are prepared with solution sheets representing the responses of experts who know the subject matter and who, if presented with the examination questions in an official capacity, would respond as indicated on the solution sheet.
The student, by self-assessment, compares his answers with those on the solution sheet. He is then given the opportunity to discuss, debate, or support the components of the expert solution--preferably with the expert himself.

From this comparison and discussion process there undoubtedly develops individualized discussion between student and faculty.

At the completion of the examination-comparison-discussion process, the student turns his paper in to the faculty member. There is a clear stipulation that no paper be identified with the individual student's name. (An "option" on signing examination papers would have coercive implications.) The purpose of this turn-in of examination papers is to provide the faculty with empirical, subjective feedback on how well learning objectives were achieved.
Experiential Assessment

Experiential assessment consists of three separate assessment tasks conducted while the student is devoting his entire educational effort to participation in a group problem-solving exercise. The assessment during this phase has been termed "experiential assessment" because the research objective and the group problem-solving method represent a model of the professional requirements that the student will most often encounter in his subsequent assignments.
Value Gained Inventory

We have good evidence that change occurs within the student during the USAWC year. We assume that changes occur also in other areas--in functional, academic, geographic, and interpersonal abilities, for example. But we are hard-pressed to say just what has changed, how much, and in what direction.

The assessment task of the Value Gained Inventory is an attempt to determine the total developmental effect (upon the student) of the year spent at the US Army War College. The student needs this information in order to review and appreciate the professional and personal development brought about by the total educational experience. Such development extends far beyond mere academic qualification.

The Value Gained Inventory is equally critical to the War College itself, as a measure of how well our educational system has achieved its curricular objectives, and it can help answer with some precision the oft-asked question, "What sort of man does the USAWC produce?"
Statistically significant differences on pairings of data for the same individuals strongly suggest effects directly attributable to the USAWC year. If results warrant, the program will be modified to include a larger segment of the students at the Army War College. Those portions of the program determined to be of marginal value will be eliminated.

During the conduct of the program, certain basic principles must be followed. First, the student participants must realize the personal and confidential nature of the program; no information will become part of official records or reports which identify the individual. The relationship between the student participant and the faculty coach must be a privileged relationship.

The other principles of an effective helping program, e.g., honesty, empathy, respect, concreteness, self-disclosure, and immediacy, are also important. Since the program is on a trial basis this year, there is some skepticism and apprehension by a few of the observers. For the most part, everyone participating in the inventory, assessment, and feedback effort is enthusiastic and confident it will enhance the educational endeavors at the United States Army War College.
SUBORDINATE RATINGS: WHY NOT?

W. H. Githens and R. S. Elster
U.S. Naval Postgraduate School

The notion of having subordinates rate their seniors is one that typically engenders a great amount of feeling. These authors became interested in the topic while teaching graduate-level psychology courses to military officers, when we found that a quick way to provoke class discussion was to assert that a system for gathering subordinate evaluations of seniors seemed deserving of experimentation. Over the last several years, therefore, we and a number of our students have conducted some studies concerning subordinate evaluations. It is to these studies that we will now turn.
TWO GENERAL SURVEYS CONCERNING SUBORDINATE RATINGS

In 1971, Lt. J. G. Bloomer, USN, working with one of the authors, sent 1100 survey questionnaires to the U.S. officers attending the Naval Postgraduate School. Over 350 officers responded to this survey concerning subordinate evaluation. Lt. Bloomer's survey focused on the acceptance of subordinate evaluations when they are for the senior's use only. Thus, his instructions read, in part, "...The purpose of this survey is to discover whether the students of NPS feel that the military officer might be benefited by a program such as this, in which he is evaluated by his subordinates, with the ratings submitted directly to him solely for his own use." The responses obtained to this survey are summarized in Table I.
TABLE I

Results of Lt. John Bloomer's Survey of Naval Postgraduate Students
Attitudes Concerning Subordinate Evaluations (N=350)

1. Do you feel such a periodic survey by his enlisted men would be a significant help to the junior officer?
   Yes ___  Maybe ___  If no, go to part III.

Note: Lt. Bloomer grouped the number answering "Yes" and "Maybe" into "those finding some merit in the program," since in the majority
of cases the "Maybe" answer seemed to indicate concern with the mechanics of the program, not necessarily the value.

Total finding some merit in the program:
   Overall (all officers responding)   74.5%
   LT and below                        83.1%
   LCDR and above                      ___
   Army (of 10 responses)              80.0%
   Marines (of 22 responses)           45.5%
2. Should such a program be restricted to the junior officer only?
   Yes 15.4%  No 83.6%

Note: The most common response to the level it should be restricted to was all below flag rank; however, many felt such feedback would be helpful all the way to CNO.

3. Do you feel that such a system would have a significant effect on enlisted morale?
   Yes 61.5%  No 15.8%  Don't know 22.7%

4. If you were rated by your subordinates, would you consider these reports:
   Quite seriously 33.4%  Seriously 57.4%  (90.8% combined)
   Casually ___  Other 5.3%

5. If you found such reports adverse, would you attempt any changes in your leadership techniques?
   Yes 72.7%  No 0%  Don't know 27.3%

Note: This question elicited many comments, and in most cases the "Don't know" answer was a qualified "Yes." That is, after consideration of the source, how the program was run, whether the reports were isolated or a trend, etc., a leadership style change might be attempted.

6. Do you feel that Commanding Officers might gain by having a similar evaluation made of them by their junior officers?
   Yes 76.4%  No 14.0%  Don't know 9.6%
Note: It is interesting to note that many who had held command commented that this feedback would have been especially useful to them.

7. Do you feel that such feedback might have been of help to you at any time in your career? If yes, when?
   Yes 87.9%  No 12.1%

Note: The "when" in this case was generally as a LT and below, but many responded that it would help at any time.

8. Do you feel that enlisted raters would conscientiously attempt an accurate evaluation of their superior officer?
   Yes 74.2%  No 7.2%  Don't know 17.6%

9. If such an evaluation system were adopted by the military services, do you feel that the raters should be restricted to:
   No restrictions (all levels of subordinates)  58.3%
   Petty Officers only                           34.6%
   Other                                          7.1%

Note: The "Other" in this case generally was accompanied by comments suggesting a graded rating system; that is, officers and CPOs grade the CO and XO while Chiefs and First Class rate their Department Head, or some similar groupings. There is certainly some merit to this idea, if only to reduce the amount of paperwork.
The results of this survey appeared to indicate two points conclusively. We quote Lt. Bloomer: "(1) There is a general concern for improving officer/enlisted lines of communication. (2) The overwhelming majority of respondents indicated a high degree of respect for the judgement of the enlisted man of today." The reader should keep firmly in mind, however, that Lt. Bloomer's survey had as a major premise that the subordinates' evaluations would be for the use of only their senior.

At about the same time Lt. Bloomer was conducting his survey, Githens surveyed 68 USN officers at the Navy Postgraduate School. Two of his questions involved the issue of ratings by subordinates. The questions and the responses obtained are presented in Table II. The reader should notice that in the instructions to Githens' survey, unlike those to Lt. Bloomer's survey, the use of the data from subordinates' ratings was not explicitly restricted to use only by the senior.
TABLE II

Results of a Survey by W. H. Githens of 68 USN Officer/Students at the U.S. Naval Postgraduate School

As part of an overall evaluation, what do you think about being rated by your subordinates?
   An excellent idea   3
   A good idea        13
   A fair idea        17
   A poor idea        34

Assuming that peer, subordinate, self, and the present "rating by superior" systems were in operation, what should be their contributions to the total evaluation? Assign a weight from 0 to 100 to each system. The sum of the weights must add up to 100 units.
   Rating by peers          13.97*
   Rating by subordinates    8.00*
   Rating by superiors      72.23*
   Rating by self            5.79*

* Means computed over sample of 68 officer/students.
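The weighting question in Table II amounts to a weighted composite: each rating source contributes in proportion to its assigned weight out of 100 units. A sketch, assuming all sources are rated on a common scale (the weights shown follow the survey's reported means):

```python
def composite_rating(ratings, weights):
    """Weighted composite of ratings from several sources,
    normalized by the total weight (nominally 100 units)."""
    total = sum(weights.values())
    return sum(ratings[src] * w for src, w in weights.items()) / total

# Mean weights as reported in Table II
weights = {"peers": 13.97, "subordinates": 8.00,
           "superiors": 72.23, "self": 5.79}
```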
The results of Githens' survey seem to show that officers are less sanguine about subordinate ratings than did the results of Lt. Bloomer's study. We feel that this difference in results stems from the psychological sets given by the questions and instructions to the surveys. Lt. Bloomer's survey addressed using subordinate ratings as a feedback vehicle for the cognizant senior, while Githens' survey asked the respondents about using data from subordinate ratings when evaluating officers. These results lead to the rather obvious conclusion that the use to which subordinate evaluations are put will influence the reception they receive from those who are evaluated. Thus, it is necessary to understand the fears and concerns that officers have concerning subordinate ratings.

In his survey, Lt. Bloomer asked those who were opposed to the type of subordinate rating system he proposed to comment on their reservations. Table III contains a representative set of the comments that were obtained.
TABLE III

Representative Comments of Officers Opposed to a Subordinate Rating System as Proposed by Lt. Bloomer

LT "This system of evaluation would turn into a popularity contest between junior officers." Note: This comment was the most common reason given for the negative response.

LT "I believe face-to-face discussions are more productive."

CDR "Undermines discipline. A crutch for poor leaders who don't have the ability. . ."
LCDR "I feel a junior officer is very formable in his early years and should be guided only by his seniors, not his juniors."

LCDR "The evaluation would seem to put too much emphasis on molding the junior officer to suit the needs of subordinates, when in fact the molding should come from the top -- providing of course that the superior is capable himself and takes advantage of opportunities to observe the junior officer."

LCDR "One would tend to cater to the wishes and whims of subordinates if only to protect his ego when he receives their 'biased' evaluations."

LCDR "Feedback should be welcome in any form. It enables one to see if points of emphasis are coming across. It should be entirely optional on the part of the enlisted men and should not go through any chain of command, i.e., leading POs or Chiefs."

LCDR "The typical junior officer is too sensitive to the opinions of his subordinates. In trying to please all of them, having no way to differentiate between serious comment and sarcastic, he could ruin himself in his own image. He could lose whatever self-confidence he had built up."

LCDR "I feel the success of such feedback would depend upon whether the recipient could take criticism, and if he could, effect changes where necessary. There is a danger of an impersonal system such as that described becoming a routine, meaningless exercise if the recipient cannot communicate its worth by viewing the feedback seriously and making changes where necessary."

LT "Problem in maintaining anonymity comes up when the officer has very few subordinates. May need to propose a system in which evaluations are only made when the officer has, for example, five or more subordinates." Note: Some form of this comment was often made, and it is probably true that maintaining true anonymity in a small unit would be difficult, if not impossible.

CDR "The evaluation should not be forced upon the subordinates. It should be optional whether or not you evaluate your senior."

LT "Since seniors have a hard time evaluating juniors and our fitness reports are a highly controversial topic, don't you think an uneducated individual would have even a more difficult time efficiently evaluating seniors?"
LCDR "I don't feel a PO is qualified to tell me how to do my job!"

LT "I do not believe that enlisted men, in general, have the background to evaluate the performance, since personnel management is only part of my job as a Naval Officer. Most could not evaluate my shiphandling ability since most have no experience. They could not evaluate my material and financial management since they do not have the training I have, etc., etc."

LT "I do not feel most younger POs and non-rated men would be honest and objective in making out such evaluations. Also, I feel it would be difficult to keep these evaluations from the concerned officer's superiors."

LT "I believe that the average sailor working for a young JO is not mature enough to make the type of comments which would be beneficial."

LCDR "I think the whole idea is a bunch of ----! If you can subject the EMs to the same pressures, hold them to the same responsibilities, etc., then they are in a position to rate him objectively, and not before."
Two Critical Incident Studies Concerning Subordinates' Views of Officer Effectiveness

Both of the studies to be discussed used the critical incident methodology (Flanagan, 1954) to gather responses from enlisted men concerning effective and ineffective officer performance.

The first study used enlisted personnel assigned to two Navy Jet Aircraft Attack Squadrons. One hundred six men representing pay grades E-2 through E-9 were included in the sample. The questionnaire form asked the respondent to "...describe an officer's behavior in the situation during your Navy experience when you considered an action on the part of the officer for whom you worked to be the best example of an effective Naval officer." Essentially, the same question was then later presented, asking the respondent to describe an ineffective Naval officer. The reader should notice that the questions address effectiveness and ineffectiveness in rather broad terms, and criteria such as unit goals or mission accomplishment were not provided. These questions to the respondents should not have served to preclude responses concerning, say, discipline by the officer, or the lack of discipline by the officer.
TABLE IV
Broad Categories of Responses Made Concerning
Effective and Ineffective Naval Officer
Performance - Study I

Categories: Sensitivity to Human Needs; Professional Competence; Trust in Subordinates; Involvement in Job; Personal Characteristics; No Response. Response counts were tabulated separately for the Effective Naval Officer descriptions (106 respondents) and the Ineffective Naval Officer descriptions.
To give the reader a flavor of the incidents obtained and categorized, the following paragraphs list some excerpts of representative incidents.

Sensitivity to Human Needs, Effective:
An enlisted man was having problems with drugs; this CO set up a drug abuse program which really helped this man and some others.

Sensitivity to Human Needs, Ineffective:
After a plane crash in which the pilot was killed, this officer commented, "men can be replaced".

Professional Competence, Effective:
Ship was anchored in Hong Kong when action had to be taken due to a typhoon.

Trust in Subordinates, Effective:
The department head gave me the assignment and left its completion up to me.

Trust in Subordinates, Ineffective:
My division officer caused me to lose a launch by insisting that the gyros on an air-to-air missile wouldn't be engaged when power was applied to the plane, even though all of us in the shop were experienced and told him he was wrong. He let us know he didn't feel he could believe us.

Involvement in Job, Effective:
During carrier qualifications, pilots in three planes in a row laid brakes while going off the catapults. The division officer saw we needed help fixing the tires so he pitched in to get the job done.

Personal Characteristics, Ineffective:
In my last squadron, my CO was a heavy drinker.

The second study's critical incident question for effective performance asked the respondent about an occasion when "an officer, whom you worked directly for,
did something that contributed directly to the successful accomplishment of your unit's mission. Exactly what did this person do that was helpful to you or other persons in the branch/division/department?" The critical incident question addressing ineffective performance was similar to the one above, but asked about something that an officer did that actually delayed or hindered accomplishment of the unit's mission.
The content analysis of the 269 critical incidents received in this second study was conducted by a group of officer/students working independently from the group who content analyzed the data from the first critical incident study. The results of the second critical incident study are summarized in Table V. (Where it would not do violence to the categories found in the second study, their labels were changed to match those found in the first study.)

TABLE V
Broad Categories of Responses Made Concerning
Effective and Ineffective Naval Officer
Performance - Study II
(N=289 Navy Enlisted Men)
Category                                  Percent of the Total Responses in this Category
1. Trust in Subordinates                  26.8
2. Professional Competence                24.3
3. Involvement in Job                     13.4
4. Sensitivity to Human Needs             11.4
5. Training of Subordinates               10.4
6. Communications                          6.5
7. Safety                                  5.0
8. Discipline                              1.5
9. Over Familiarity with Subordinates      0.7
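As a side note, the tabulation behind a table of this kind is easy to reproduce. The sketch below is a hypothetical reconstruction, not part of the study: the function, category labels, and counts are invented for illustration. It tallies the category assigned to each incident and converts the tallies to percentages of total responses.

```python
# Hypothetical sketch of a content-analysis tally: count the category
# assigned to each incident, then express each count as a percent of
# all responses.  The counts below are invented for illustration.
from collections import Counter

def category_percentages(assigned_categories):
    """Return {category: percent of total responses}, rounded to 0.1."""
    counts = Counter(assigned_categories)
    total = len(assigned_categories)
    return {cat: round(100.0 * n / total, 1) for cat, n in counts.items()}

incidents = (["Trust in Subordinates"] * 27
             + ["Professional Competence"] * 24
             + ["Involvement in Job"] * 13)
print(category_percentages(incidents)["Trust in Subordinates"])  # 42.2
```

Rounding each category independently can make the column sum drift slightly from 100.0, which is worth checking before publishing such a table.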
A comparison of Tables IV and V shows that there is a high overlap between the categories derived from the critical incidents gathered during the two studies. The second study (Table V) yielded five categories not found in the first study (Table IV): Training of Subordinates, Communications, Safety, Discipline, and Over Familiarity with Subordinates. The first study, on the other hand, yielded a Personal Characteristics category not found in the second study. The authors suspect that the difference between the questions in the two surveys -- the second addressing mission success while the first
did not -- was the major reason the categories found from the two studies did not map even better onto one another.

Two categories in Table V probably require some additional comments, as they might seem to substantiate some of the reservations held concerning subordinate ratings; these are the categories labeled Discipline and Over Familiarity with Subordinates. The incidents under Discipline typically mentioned punishment of a group. For instance, "Punishment was given to the whole radio gang by rescinding special liberty when one individual was at fault for not delivering a message," was one of the incidents in this category. Incidents in the category labeled Over Familiarity with Subordinates all referred to cases in which junior officers bypassed senior enlisted men in the chain of command when dealing with enlisted men. The individual reporting such an incident apparently felt the chain of command was bypassed because the junior officer was too familiar with some of the junior enlisted men, and that this hindered mission accomplishment.

As was the case in the first study, none of the critical incidents gathered during the second study smacked of a petty nature. Instead, they usually addressed performances that the Navy would either wish to reward or extinguish. Neither study provided support for the notion that subordinate ratings would be greatly influenced by officers adopting strategies of ingratiation in dealing with their subordinates.
Discussion

The use of subordinate ratings has been discussed for many, many years. Based on armchair analyses of the topic, it has been pointed out that subordinates are in a position to observe some of the performance of their supervisors. In many cases the subordinate is in a unique position which affords him the opportunity to observe supervisory performance that is inaccessible to others. Assuming that more information available concerning various aspects of a supervisor's performance will permit a better evaluation of his overall performance, it follows that subordinate ratings should be used. But even though this was all pointed out long ago, subordinate ratings have not been utilized in practice. This non-use is evidently based on the conceptions of a "popularity game", limited perspective, etc.
The most significant finding of our critical incident studies in this area is that the bases used by subordinates for their evaluations of their superiors were not consistent with such conceptions. They were, for the most part, based on aspects on which most supervisors would want to be rated. The naval officers who, as part of their academic training, gathered the critical incident information for us were originally suspicious of, fearful of, and generally opposed to, subordinate ratings prior to their work in this area. Their switch in attitude after having gathered and analyzed the critical incident information was dramatic.
Subordinate ratings, of a sort, have recently received an upsurge in use in academia. Students' evaluations of teachers or professors are no longer an uncommon practice, and at least one firm, the Apex Corporation, uses subordinate ratings. The main resistance to their use seems to be based on possible misconceptions about what the subordinates would actually be rating. If this is so, what information could be generated to correct these possible misconceptions? Would it be necessary to have all supervisors do studies in which they gathered critical incidents? With our recent renewed attention to the "people" aspect of organizations, subordinate ratings may be more important and possible now than ever before. Since our critical incident studies argue strongly against the reasons usually given for not having subordinate ratings, the question should again be asked: Subordinate Ratings: Why Not?
References

Flanagan, J. The Critical Incident Technique. Psychological Bulletin, 1954, 51, 327-358.

Lutrell, H. Performance Appraisal at Apex Corporation: Discovery -- Or Delusion? National Business, March 1972.
The Keesler Study - Electronic Technicians Four Year
Evaluation of Three Types of Training

Virginia Zachert, Ph.D., Consultant
Medical College of Georgia
Augusta, Georgia

(For the 1973 Military Testing Association Conference,
San Antonio, Texas, 31 October 1973)
INTRODUCTION

A Department of Defense service test program was established in 1965 at Keesler Air Force Base, Mississippi, to explore ways to reduce the time and cost of training non-prior service airmen to perform as electronic equipment repairmen. Two occupational specialties and two markedly different approaches to training were compared with a control group. During the year 1966, 441 graduates of the four experimental courses were matched with 294 control graduates of corresponding regular courses and assigned to field units in four commands. Job performance data were collected throughout their first active duty tour and analyzed to compare their success on the job.
TECHNIQUES USED

There were five major evaluation steps, as follows:

After-Course Proficiency Test at Keesler Technical Training Center - the first week following graduation. (Appendix A)

First-Job Proficiency Evaluation - first ten weeks on the job. (Appendix B)

Delayed Field Evaluation - nine months after completion of resident training. (Appendix C)

Second-Job Proficiency Evaluation - after two years on the job. (Appendix D)

First Enlistment Termination Survey - at the end of the initial active duty tour. (Appendix E)

The following techniques were used to gather evaluation data:

Performance and written tests administered to graduates.
Job proficiency appraisals by field supervisors.
Mailed questionnaires completed by graduates and their supervisors.
Field visits by evaluation team members.
Survey of records at Air Reserve Personnel Center.
FINDINGS

I. After over one year of graduate field experience, findings indicated that:

According to graduates and their supervisors, training provided in the control (regular) courses is very closely aligned with the needs of field units and is compatible with established unit OJT programs, career development courses, and specialty knowledge testing requirements. Field supervisors almost unanimously opposed any reduction in the current scope and quality of resident technical training.

The shorter service test courses with drastically reduced electronics fundamentals training (the "S" courses), while not completely unsuccessful, resulted in several important training deficiencies. Graduates and their supervisors were vocal in their adverse criticism of the "S" type experimental training received.

The shorter service test courses which emphasized electronic fundamentals training but de-emphasized equipment training produced substantial difficulties. Among these deficiencies in graduate job proficiency were:

Lack of sufficient initial skill in using published technical data (technical orders, maintenance manuals, etc.).

Too little versatility and skill in using standard test equipment.

Too limited a knowledge of electronic systems operation and maintenance techniques (this was very pronounced for aircraft equipment repairmen because they were responsible for maintaining several different types of equipment even during initial job assignments).

Graduates were also very critical of this training.

Evidence to date, although incomplete, indicates that if field conditions remain the same, adoption of the shorter courses would:

Reduce initial job proficiency.

Shift additional training workload and loss of productive work into operational units.

Under present standards, increase SKT failures.
II. After approximately three years of the Service Test Program, findings indicated that:

The experimental graduates compared favorably with the control graduates after three years of job experience and considerable formal training and job oriented training.

Experimental graduates and supervisors were still critical of the abbreviated type resident technical training received by the experimental graduates.

If field conditions remain the same, the effects of adopting the shorter courses would apparently be the same as stated earlier.

INFERENCES THAT CAN BE MADE

The service
Appendix A
Sample of After-Course Proficiency Test (Performance)
Appendix A
Sample of After-Course Proficiency Test (Written)

NAME   AFSC
CLASS  DATE  TEAM
DO NOT WRITE IN SPACE BELOW
Appendix B
Sample of First-Job Proficiency Evaluation (10 weeks)
Appendix C
Sample of Delayed Field Evaluation Questionnaire (Graduate)
N-X = 137  N-C = 137

DC Fundamentals
Vacuum Tubes
Solid State Devices
Indicators
Test Equipment
Troubleshooting Logic
Use of Hand Tools
Use of Maintenance Publications
Spare Parts Acquisition
Other

All omissions not shown in totals.
1. GRADUATE'S JOB PROFICIENCY

All omissions not shown in totals.
Appendix C
Sample of Delayed Field Evaluation Graduate-on-the-Job
Trainability Questionnaire (Unit Project Technician)

b. Check the appropriate squares in the table below to reflect your best estimate of the graduate's performance in the TECHNICAL TRAINING programs of paragraph 4a.
8. In your judgment, which of the following courses in the DOD Service Test Program would you recommend for training future radar repairmen?

"C" (Regular Control Course)

Amount of Useful Work Produced
Quality of Work Produced
Appendix D
Sample of Second-Job Proficiency Evaluation (Supervisor)

N = X-134  C-135
1. SUPERVISOR'S EVALUATION OF GRADUATE'S JOB KNOWLEDGE

INSTRUCTIONS

Please check each knowledge item listed below in the appropriate column to indicate your judgment of the graduate's subject knowledge level. Base the rating on your observation of this graduate's performance while under your supervision.

Scale Value  DEFINITION

NA  Not applicable to, or has not demonstrated his knowledge in, his current assignment.

D  EVALUATION. Can evaluate conditions and make decisions about the subject.

C  ANALYSIS. Can analyze facts and principles and draw conclusions about the subject.

B  PRINCIPLES. Can explain relationship of basic facts and state general principles about the subject.

A  FACTS. Can identify facts and terms about the subject.
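A scale of this kind is ordinal, and one common convenience when summarizing such ratings is to code the levels numerically so an average knowledge level can be computed across items. The sketch below is an illustration under that assumption only; the numeric coding and the helper function are not part of the Keesler study.

```python
# Illustrative coding of a supervisor knowledge scale of the kind shown
# above (an assumption, not the study's method): NA is excluded, and
# A (facts) through D (evaluation) are treated as ordinal levels 1-4.
SCALE = {"NA": None, "A": 1, "B": 2, "C": 3, "D": 4}

def mean_knowledge_level(ratings):
    """Average the coded levels across items, ignoring NA ratings."""
    levels = [SCALE[r] for r in ratings if SCALE[r] is not None]
    return sum(levels) / len(levels) if levels else None

# Four items rated by a supervisor: facts-level, principles-level,
# not applicable, and evaluation-level.
print(round(mean_knowledge_level(["A", "B", "NA", "D"]), 2))  # 2.33
```

Treating ordinal rubric levels as equally spaced numbers is a modeling choice, not a property of the scale itself.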
TRAINING STANDARD ITEM

1. AC Circuits
3. Solid State Devices
4. Vacuum Tubes
5. Oscillators
6. Receiver Principles
7. Motors and Servomechanisms
8. Waveshaping Circuits
9. Microwave Principles

FUNCTION AND TACTICAL USE OF AC&W RADAR EQUIPMENT

10. Search Radar
11. Height Finder Radar
12. Gap Filler Radar
COMPETENT. Can do a ...

1. Multimeters
2. Vacuum tube voltmeters
3. Signal generators
4. Oscilloscopes
5. Spectrum analyzers
6. Power meters
7. Crystal checkers
8. Transistor testers
10. Frequency measuring devices
11. MTI simulators
12. Transponder test sets
13. Selects the proper test equipment and repair tools in maintaining his equipment
14. Recognizes defective test equipment through indications obtained during operation
Appendix E
Sample of First Enlistment Survey (Graduate)

2. Date separated from the Air Force: day, month, year.

3. Base of separation:

4. Military pay grade at the time of separation from the Air Force: E-1, E-2, E-3, E-4, E-5, Other.

5. Marital status: a. Single  b. Married  c. Divorced  d. Widower

6. Number of dependents:

7. Do you have any plans at present for returning to active military duty? Yes, No. If yes, please list date: day, month, year.

8. Are you now a member of an active reserve unit? Yes, No.

9. In your judgment, did the technical training you received at Keesler AFB adequately prepare you for satisfactory career progression in the Air Force? Yes, No. If no, please explain briefly.

10. Please check the following statement concerning your employment: Full Time, Part Time, Unemployed.

11. If you are now self-employed, give a brief description of your work.

12. If you are now employed by someone other than yourself, please complete the following items:
b. Type business
c. Brief description of work

13. If you are employed, does your job require any knowledge ...? Please explain briefly.
Appendix E
Sample of First Enlistment Termination Survey (Supervisor)

N = X-83  C-80

Observation: Is your rating based upon a personal observation of the graduate performing this task?

DEGREES OF SUPERVISION: 1. ...

PERFORMANCE TASKS

2. Uses of technical publications such as:
a. Wiring diagrams
PROCESS VERSUS PRODUCT MEASURES IN PERFORMANCE TESTING

William C. Osborn
Human Resources Research Organization

Consider the following situations.

After having undergone training in tank gunnery, a soldier's proficiency is being tested on a gunnery range. During the course of this test he will fire several main gun rounds at targets varying in size, shape and distance. In each case his score is determined by whether he hit the target within some specified time limit, and he is certified a tank gunner if he scores above some minimum level required for qualification.

Under other circumstances, a soldier having undergone similar training may be evaluated differently. Let's assume that ammunition is scarce or that adequate range facilities are not available. Here the soldier might have to be tested under dry-firing conditions. He would be required to take actual or miniaturized versions of targets under fire, and a tester would assess in each case whether the gunner 1) acquired the target with smooth manipulation of the hand controller, 2) correctly ranged on the target, 3) achieved the proper sight picture, 4) squeezed the firing switch without losing the sight picture, and 5) fired within some allotted time. Here the gunner is qualified if he performed each of the five procedural steps correctly on some minimum number of targets.
In the first situation described, a task outcome or product measure -- target hits -- is the basis for evaluating gunners; whereas, in the second instance correct task procedure or a process measure is the basis for evaluation. Though somewhat oversimplified, the contrasting approaches to performance testing drawn in these two examples illustrate the focus of this paper: the use of process versus product measures in performance testing.

I am chiefly interested in the use of performance tests to evaluate the results of training, and to properly set the stage for what I have to say today let me first summarize what the training evaluator considers to be the ideal use of product and process measures. Performance tests are used in training evaluation to serve two purposes: (a) to certify student achievement, and (b) to diagnose weaknesses in the instructional system. In the use of such tests, proficiency measures which focus on task outcome (products) normally provide data relevant to the first purpose, whereas measures of how the tasks are carried out (process) pertain to the second. For example, the number of targets hit by the tanker trainee would be the product measure by which his qualification as a tank gunner is assessed. However, if he fails to qualify we would also like to know why -- where was his training weak? This is where process measures are useful: if the gunner consistently missed targets, was it because he ranged incorrectly, or obtained an improper sight picture, or wasn't able to maintain the gun lay during firing? This type of data is useful in diagnosing areas of training deficiency, and is essential in efficiently remediating trainees.
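The diagnostic use of process data can also be sketched: tally, across a trainee's engagements, how often each procedural step was failed, so a consistent miss can be traced to a specific training weakness. The step names and records below are illustrative assumptions, not data from any test.

```python
# Illustrative diagnostic tally (an assumption, not the paper's method):
# count how often each procedural step failed across engagements, so a
# consistent miss can be traced to a specific area of training.
STEPS = ["acquire target", "range on target", "sight picture",
         "squeeze firing switch", "fire within time"]

def failure_counts(step_records):
    """step_records: per engagement, a pass/fail flag for each step."""
    return {step: sum(not record[i] for record in step_records)
            for i, step in enumerate(STEPS)}

records = [
    [True, False, True, True, True],
    [True, False, False, True, True],
    [True, False, True, True, True],
]
print(failure_counts(records)["range on target"])  # 3
```

Here every engagement fails the ranging step, which is exactly the kind of pattern that points remediation at one part of the course rather than at the trainee's overall qualification.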
Thus we see the roles played by product and process measures in training evaluation. Both are important -- even critical -- when used for their respective purposes in evaluating the results of training.

Product and Process as Measures of Student Achievement

With this background I would now like to narrow the focus of my comments to the use of these types of performance measures in serving just the first
of the two evaluation purposes stated above - that of certifying student'<br />
achievement. ,Xn testing a student to dcteraine ifi,hc: is qualified to<br />
advance to the next level of training, or ultimately out of training and on<br />
r<br />
to the job, we Gould, as mentioned, normally prefer;. to use a product score.<br />
Before a man is certified as a gunner we would likk tohnve him demonstrate<br />
that he can hit targets; or before certification as a navigator he should<br />
actually demonstrate that he can get frcm point A to point B; etc.<br />
Although it may safely be said that every task has a purpose -- the<br />
fact of the natt(:r is that in practice a great many perfomance tests are<br />
used chich em?131 process measures o.:fy in evaluating student achievement<br />
or job readiness. Khy is this the case? Is the substitution of process for.<br />
product measurement justified? If so, when? If not, how nay the test<br />
developer inprovt: his methods? These are the questions that I will be<br />
addressing today.<br />
Product, Process and Types of Tasks

Before exploring in more detail the issue of why process measures are so widely substituted for measures of task product, it will first be helpful to consider three categories of tasks:

1. Tasks in which the product is the process.

2. Tasks in which the product always follows from the process.

3. Tasks in which the product only may follow from the process.

Relatively few tasks are of the first type, those in which product and process are one and the same. These are normally tasks which serve an aesthetic purpose such as gymnastic exercises or springboard diving. Close
order drill is a good military example. Here we see that the outcome or product of the task is no more or less than the correct execution of steps in task performance -- that is, the process.

More tasks are of the second type mentioned, those in which the product invariably follows from the process. Fixed-procedure tasks typically fall in this category. Troubleshooting an electrical circuit, disassembling a rifle, and implanting a land mine are examples. In tasks of this type the procedural steps are known, observable, and comprise the necessary and sufficient conditions for task outcome; so if process is correctly executed, task product necessarily follows.

A great many job tasks are of the third type where the product is less than fully conditional on the process. In other words, with these types of tasks the process may appear to have been correctly carried out but the goal or product not achieved. This can happen for one of two reasons: either (a) because we are unable to fully specify the necessary and sufficient steps in task performance or (b) because we do not or cannot accurately measure them. In aimed firing of a rifle, for example, we are interested in knowing if a soldier is standing with his body properly oriented to the target, face properly positioned on stock, rifle sling in correct position, lead arm perpendicular, and firing arm parallel to ground; if he is breathing correctly, has a good sight picture, and squeezes the trigger. Presumably, if this process is followed the rifleman will hit the target. Assuming that we have identified all essential steps in rifle firing, and, further, that we can reliably measure their correct execution, then the task is of the second type described above and does not belong in Category 3. However, in practice, we know that our best efforts to evaluate execution of this particular task are not sufficient to warrant substituting process for product measurement.
In other words, sometimes the target is missed even though in the judgment of a skilled evaluator the rifleman did everything right. Therefore, either because we are not absolutely certain that we have identified all necessary steps in the firing process or because we cannot accurately assess the execution of some of them, we ultimately qualify a rifleman on the basis of whether he hits the target.

In reflecting on the nature of these three types of tasks an important implication emerges regarding the role of product measurement in testing task performance: Because of the interchangeability of process and product for tasks of the first two types, it doesn't really matter which measure is used to assess proficiency; but for tasks of Type 3, product measurement is very important. In spite of this, in practice, performance tests for many of the latter type of tasks do not attempt to measure product. Why is this so?

Problems in Product Measurement

The reasons largely stem from practical considerations in which the measurement of task product is viewed as either too costly, too dangerous, or for other reasons simply too impractical. In testing such performances as hand-to-hand combat, for example, where task product would take the form of disabling a hostile enemy, the test developer is normally limited to requiring the demonstration of task process. Similarly, in a first aid task like controlling the bleeding from an external wound, the person being tested is, for obvious reasons, asked only to demonstrate the process. Or, in removing a jammed round from a weapon, it is considered impractical to actually jam a round in order to create a valid test situation, so again only the steps in task performance are measured. Many such examples are to
be found in the area of interpersonal behavior. One has to do with instructor<br />
training, where at least traditionally the military instructor trainee<br />
is evaluated by having him prepare and deliver a block of instruction during<br />
which he is judged on such process factors as: "stood erect," "had good<br />
eye contact with audience," "could be heard in the back of the room," "used<br />
visual aids effectively," "covered all points in the lesson plan," etc.<br />
Although the product of instruction clearly is student learning, I believe<br />
it is seldom if ever used as the criterion for qualifying an instructor<br />
trainee -- probably because it would involve a more time consuming and impractical<br />
method of evaluation. Another very similar example which comes to<br />
mind pertains to a recruiter's task of delivering a persuasive speech to a<br />
student audience. If the product of this task could be measured it would<br />
be in terms of the number in the audience who later contact the recruiter<br />
with an interest in enlisting. But, again, because of its implausibility<br />
as a measure of student achievement, product gives way to process and the<br />
recruiter trainee's persuasive speech is evaluated in much the same way as<br />
was described above for the instructor trainee.<br />
Dealing with Problems of Product Measurement<br />
I'm sure that those of you involved in performance testing can think of<br />
many more instances in which product measurement is not used. Clearly, some<br />
of these are justified by cost or safety considerations -- but others are<br />
not. Which brings us to the central point of my remarks today. I believe<br />
that test developers often fail to see the importance of measuring task<br />
outcome; or perhaps they merely slight the importance when faced with practical<br />
limitations in its measurement. Whatever the motivation, I believe<br />
they do not strive nearly hard enough to overcome resource problems which<br />
254<br />
constrain attempts to measure task product, and too easily give in to the<br />
simplistic approach of measuring task process. The overriding question that<br />
a test designer should ask himself in this situation is: If I use only a<br />
process measure to test a man's achievement on a task, how certain can I be<br />
from this process score that he would also be able to effect the product or<br />
outcome of the task? Where the degree of certainty is substantially less<br />
than that to be expected from normal measurement error, the test designer<br />
should pause and reconsider ways in which time and resource limitations can<br />
be compromised in achieving at least an approximation to product measurement.<br />
Although there will remain instances in which product measurement simply<br />
cannot be achieved, we will discover many others where, through some imaginative<br />
thinking, we can devise simulations that will enable us to assess task<br />
outcome in a more relevant fashion.<br />
In testing the student instructor, for instance, I see no compelling<br />
reason why we shouldn't get away from the "charm school" approach to evaluation.<br />
Why not simply have him conduct a brief instructional session for a<br />
small group of students (perhaps his peers), with his achievement being<br />
measured in terms of whether his students have accomplished the instructional<br />
objective? In the case of the recruiter trainee's speech, evaluating task<br />
product is more difficult; but surely a measure closer to task outcome<br />
could be achieved -- perhaps a paid student panel representing the potential<br />
audience could be employed to view and rate the appeal of videotaped trainee<br />
speeches. In zeroing in on critical motor skills, such as those involved<br />
in extracting a jammed round from a weapon or in controlling bleeding from<br />
a wound, it would seem that relatively low cost simulators could be devised<br />
for use in testing task outcome. Hand-to-hand combat very likely represents<br />
a case in which ultimate task product simply cannot be measured. However,<br />
255<br />
in a similar vein, the Army is now experimenting with an intriguing method<br />
of assessing the outcome of an infantry squad combat exercise. The principal<br />
feature of the method entails each participant having a number printed on<br />
his helmet and an inexpensive scope mounted on his rifle; then, during the<br />
course of the exercise a soldier may "kill" by correctly reporting an<br />
enemy's number, or "be killed" by allowing his number to be sighted by the<br />
enemy. Number size and scope power have been carefully calibrated from<br />
empirical data so that the probability of a simulated "kill" is highly<br />
correlated with the expected outcome in actual battle. This is an excellent<br />
example of an innovative method of achieving product measurement on a task<br />
that heretofore had been subject to process evaluation.<br />
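The calibration claim in the preceding paragraph amounts to checking that simulated "kill" probabilities rise and fall with expected real-engagement outcomes. A minimal sketch of such a check, using a hand-rolled Pearson correlation; all probabilities are invented for illustration, since the paper gives no data:

```python
# Sketch of the calibration check implied above: how strongly do the
# simulated "kill" probabilities track expected real-engagement
# outcomes across several conditions?  All numbers are invented.
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Probability of a simulated kill vs. expected actual outcome,
# at five increasing engagement ranges (illustrative values).
simulated = [0.90, 0.75, 0.55, 0.35, 0.20]
expected = [0.88, 0.70, 0.50, 0.30, 0.15]

print(f"r = {pearson_r(simulated, expected):.3f}")
```

A correlation near 1.0 would support the calibration claim; in practice one would also want the two probabilities to agree in level, not merely in rank.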
Obviously, from these examples we can see that the accomplishment of<br />
product measurement is not always a simple matter; but it is a demanding<br />
and essential goal to be pursued by the performance test developer if his<br />
products are to be relevant to real world behavior.<br />
256<br />
BRIEF OF<br />
FOUR RESEARCH STUDIES USING<br />
THE SELF-EVALUATION TECHNIQUE<br />
(SET STUDIES)<br />
US Army Ordnance Center and School<br />
Doctrine and Training Development Division<br />
Purpose of the SET Studies<br />
The purpose of the four SET studies was to determine if student self-evaluations<br />
of their performance tests improved student performances on<br />
required performance tests.<br />
Procedure of the SET Studies<br />
This section describes the subjects, course requirements, testing<br />
instruments used, how the tests were administered, and how the scores were<br />
obtained.<br />
The subjects used in these SET studies numbered three hundred and<br />
fifty-four students who performed twenty-seven hundred and four performance<br />
tests. These students were enrolled in the 44C20 Welder Course, 41C10/20<br />
Fire Control Instrument Repairman Course, 44E20 Machinist Course, and 63C20<br />
Fuel and Electric Systems Repairman Course. These courses were conducted<br />
at the US Army Ordnance Center and School.<br />
Course Requirements<br />
One of the prerequisites for each of the courses is as follows:<br />
Welder, General Maintenance (GM) score of 90; Fire Control Instrument<br />
Repairman, General Maintenance (GM) score of 100; Machinist, General<br />
Maintenance (GM) score of 100; and Fuel and Electric Systems Repairman,<br />
Motor Maintenance (MM) score of [value illegible in source].<br />
The evaluation instrument for the Fuel and Electric Systems Repairman<br />
course breaks out several of the performance tests so that some of the<br />
required tasks could be tested at a prescribed testing station. A copy<br />
of this instrument is attached as Appendix B.<br />
257
For the evaluation of the Fire Control Instrument Repairman and the<br />
Apprentice Machinist a separate performance evaluation instrument was<br />
designed to cover each of the performance tests. Copies of these instruments<br />
are attached as Appendix C and D.<br />
<strong>Testing</strong> Procedure<br />
Several classes were used to validate all self-evaluation<br />
instruments. After the instruments were validated, students who did not make<br />
self-evaluations were designated as the control group and students making<br />
self-evaluations were considered to be the experimental group.<br />
The SET test project director matched each student in the control<br />
group with a comparable student in the experimental group, using the GM<br />
or MM scores as a basis for equating these matched pairs. In some<br />
instances it was impossible to make comparable matches for each of the<br />
matched pairs when matching the students by classes. The overall results<br />
on the total of matches for each course showed a very close mean for the<br />
matched pairs.<br />
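The pairing procedure described above can be sketched as a greedy nearest-score match. The study does not specify the actual algorithm used, and the student names and aptitude scores below are invented:

```python
# Sketch of the matched-pairs step described above: pair each control
# student with the unused experimental student whose GM/MM aptitude
# score is closest.  Names and scores are invented; the study does
# not describe the exact matching procedure.

def match_pairs(control, experimental):
    """Greedy nearest-score pairing of (name, score) tuples."""
    available = list(experimental)
    pairs = []
    for student in control:
        # Pick the remaining experimental student with the closest score.
        best = min(available, key=lambda e: abs(e[1] - student[1]))
        available.remove(best)
        pairs.append((student, best))
    return pairs

control = [("C1", 102), ("C2", 95), ("C3", 110)]
experimental = [("E1", 101), ("E2", 96), ("E3", 112)]

for c, e in match_pairs(control, experimental):
    print(c, "<->", e)
```

Matching on a pre-course aptitude score in this way is what lets the later comparison of graders' means be read as an effect of the self-evaluation treatment rather than of ability differences.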
The experimental groups made a self-evaluation for each of the performance<br />
tests they performed. A technically qualified grader evaluated<br />
each performance test performed by both the experimental and control<br />
groups. These evaluations were forwarded to the SET study project director,<br />
who scored the self-evaluators' score sheets, the graders' evaluations of<br />
the self-evaluators' score sheets, and the graders' evaluations of the<br />
non-self-evaluators. The student's self-evaluation and the grader's<br />
evaluation sheets were attached together and returned to the students<br />
with either critical or complimentary remarks which were influenced by<br />
the scores they made on that performance test. No score was recorded<br />
on the returned student or grader's evaluation sheets, and the comments<br />
reflected only the student's weak or strong tasks on the performance test.<br />
The graders' scores for the non-self-evaluators were recorded by the project<br />
director opposite each student's match from the self-evaluators.<br />
The performance test scores for each of the matched pairs were used<br />
to test for a significant difference between the means of the control<br />
and experimental groups.<br />
For the purpose of evaluation, the SET studies were designed to test<br />
for a significant difference between the graders' scores for students who<br />
made self-evaluations and the graders' scores for students who did not<br />
make self-evaluations.<br />
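A significance test on matched pairs is conventionally a paired t test on the pairwise score differences. The study does not name the exact statistic used, so the following is a sketch under that assumption, with invented graders' scores:

```python
# Sketch of the matched-pairs significance test described above:
# a paired t statistic on graders' scores for matched control and
# experimental students.  All scores are invented for illustration.
import math
import statistics

def paired_t(x, y):
    """Return (t statistic, degrees of freedom) for paired samples."""
    diffs = [b - a for a, b in zip(x, y)]
    n = len(diffs)
    sd = statistics.stdev(diffs)            # sample standard deviation
    t = statistics.mean(diffs) / (sd / math.sqrt(n))
    return t, n - 1

control = [78, 82, 75, 90, 85, 70]          # no self-evaluation
experimental = [84, 85, 80, 93, 88, 78]     # with self-evaluation

t, df = paired_t(control, experimental)
print(f"t = {t:.2f} with {df} degrees of freedom")
```

With 5 degrees of freedom, a t beyond roughly 2.57 is significant at the .05 level, two-tailed.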
258<br />
The secret of the SET studies which I am going to present can be<br />
attributed to direct communication and automatic feedback.<br />
To show you the importance of proper and correct communicative<br />
procedure I would like to read a copy of the registration form I filled<br />
out when I registered for this Military Testing Association symposium.<br />
Now I will briefly cover the need for conducting SET studies. John<br />
B. Carroll's article "Neglected Areas in Educational Research," presented<br />
in the volume 42, May 1961 issue of the Phi Delta Kappan, stated that<br />
"Research has told us little about the role of motivation in school<br />
learning." "Let us take for granted that motivation is the sense of<br />
willingness on the part of the learner to engage in learning." Studies<br />
of both intrinsic and extrinsic motivation are needed.<br />
S. L. Pressey and F. P. Robinson stated in their article "Psychology and<br />
the New Education": Informing students regarding their work is essential<br />
for intelligent learning. If real life tests are brought to the point<br />
where both student and teacher can be informed about progress in the<br />
development of social adjustments, interests and attitudes, this would<br />
be splendid.<br />
The only study which I could find that even vaguely resembled the<br />
SET studies was located in the twenty-ninth yearbook of the National<br />
Society for the Study of Education, published in 1930.<br />
THE DESIGN AND RESULTS OF THE STUDY ARE AS FOLLOWS:<br />
Two equivalent groups of 358 fourth grade pupils were given identical<br />
arithmetic drill exercises for 15 minutes a week for twenty-one weeks.<br />
The only variable was specific knowledge of improvement. The members<br />
of the control group never received their weekly scores. Those in the<br />
experimental classes kept individual progress charts and pooled their<br />
losses or gains in a graphic representation of improvement of the class<br />
as a whole. The record of actual improvement was made possible by using<br />
drill units for which comparable standards had been provided. This<br />
scoring device compensated for differences in difficulty of the tasks.<br />
The group which had continuous information concerning improvement<br />
made significantly greater gains than the control group.<br />
This indicates that if students are knowledgeable of what requirements<br />
are expected of them and they are made aware of their progress, there<br />
should be an improvement in their overall performance.<br />
Please refer to the self-evaluation instrument for the welders<br />
(Appendix A).<br />
Several classes were used to validate this instrument. The first,<br />
second and third instruments were discarded and this instrument was<br />
259<br />
adopted for the evaluation. This same instrument can be used for all<br />
eight welding projects.<br />
Please refer to Appendix E, which shows the final results for the four<br />
studies. It shows that the self-evaluation technique was successful for all<br />
the studies conducted. A copy of the complete study can be obtained upon<br />
request by writing:<br />
Dr. John J. Holden<br />
US Army Ordnance Center & School<br />
ATTN: ATSL-CTD-DT-P<br />
Aberdeen Proving Ground, Maryland 21005<br />
261
APPENDIX A<br />
EVALUATION OF STUDENT'S<br />
WELDING PROJECT<br />
STUDENT'S NAME ________ CLASS NO ____ DATE ____<br />
MARK AN (X) IN THE PROPER SPACE:<br />
STUDENT SELF-EVALUATION ____ GRADER EVALUATION ____<br />
INTRODUCTION: This is a complete welding project which will indicate how well<br />
you feel you performed the important tasks for a weld.<br />
The purpose of this experiment is to obtain information that will<br />
help us to improve the welding course. You will not receive a<br />
grade on your welding project, but an honest evaluation of your<br />
performance will be of a great help to us.<br />
DIRECTIONS: Read each item on the task list and mark an (X) in the column<br />
which shows how well you feel you performed that task.<br />
Column 4: Better than Average Student Welder<br />
Column 3: Equal to Performance of Average Student Welder<br />
Column 2: Not quite Equal to Performance of Average Student Welder<br />
Column 1: Far Below the Performance of Average Student Welder<br />
TASKS PERFORMED:<br />
1. Obtained correct flame adjustment (current setting).<br />
2. Obtained good fusion (tinning action).<br />
3. Obtained good crown on weld face.<br />
4. Obtained good penetration (bonding).<br />
5. Kept bead free of cracks.<br />
6. Obtained even uniform bead shape.<br />
7. Kept bead free of undercut (droplets).<br />
8. Kept bead free of overlap.<br />
9. Kept bead free of burned or crystallized metal.<br />
10. Kept bead free of holes.<br />
Could finish project in time allotted? Yes: ____ No: ____
APPENDIX B<br />
PERFORMANCE TEST 63C20<br />
DELIVERY VALVE SPRING TEST<br />
STATION 1: Checking Delivery Valve<br />
Date ____<br />
____ Student Self-Evaluation ____ Instructor Evaluation<br />
1. Connected the nozzle tester and operated the pump for testing the delivery<br />
valve springs with:<br />
____ No trouble ____ Little trouble ____ Lots of trouble<br />
Received instructor's help to perform operation: YES ____ NO ____<br />
If YES is checked, how many times did the instructor help? ____<br />
2. Observed opening pressure of all delivery valve springs, recorded readings<br />
and determined condition of valves:<br />
____ 8 valves correctly ____ 6 or 5 valves correctly ____ 3 or less valves<br />
correctly.<br />
Received instructor's help to perform operation: YES ____ NO ____<br />
If YES is checked, how many times did the instructor help? ____<br />
Overall performance for this station:<br />
____ Above average student performance ____ Equal to average student<br />
performance ____ Below average student performance ____ Far below average<br />
student performance<br />
263<br />
APPENDIX B (Cont)<br />
Student's Name ____ Class # ____<br />
[Station 2 form items are illegible in the source.]<br />
APPENDIX B (Cont)<br />
PERFORMANCE TEST 63C20<br />
STATION 3: Timing the Unit Injectors<br />
Student's Name ____ Class # ____ Date ____<br />
____ Student Self-Evaluation ____ Instructor Evaluation<br />
1. Positioned engine for timing of the selected injector correctly:<br />
____ No trouble ____ Little trouble ____ Lots of trouble<br />
Received instructor's help to perform operation: ____ YES ____ NO<br />
If YES is checked, how many times did the instructor help? ____<br />
2. Selected correct timing gage and adjusted injector to specifications with:<br />
____ No trouble ____ Little trouble ____ Lots of trouble<br />
Received instructor's help to perform operation: ____ YES ____ NO<br />
If YES is checked, how many times did the instructor help? ____<br />
Overall performance for this station:<br />
____ Above average student performance ____ Equal to average student<br />
performance ____ Below average student performance ____ Far below<br />
average student performance<br />
Did you have enough time to complete the test required at each of the 3 stations?<br />
____ YES ____ NO. If answer is marked NO, give reason(s) on back of this sheet.<br />
265<br />
APPENDIX C<br />
SIXTH PERFORMANCE EXAMINATION 41C10/20<br />
Maintenance and Adjustments of Panoramic Telescope M115<br />
STUDENT'S NAME ____ CLASS NUMBER ____ DATE ____<br />
MARK AN (X) IN THE PROPER BOX:<br />
STUDENT SELF-EVALUATION ____ GRADER EVALUATION ____<br />
PURPOSE: The purpose of this self-evaluation experiment is to obtain information<br />
that will help us to improve the fire control instrument repair course.<br />
You will not receive a grade on your self-evaluation for this project,<br />
but an honest evaluation of your performance will be of a great help to<br />
us to evaluate the course. Your course grade will be determined by the<br />
score you obtain from the department's performance examination.<br />
DIRECTIONS: Mark a check in the TOLERANCE RANGE COLUMN for the TOLERANCE which you<br />
obtained on each of the listed dimensions to be checked.<br />
DIMENSIONS TO BE CHECKED (Columns 4 through 1, best to worst):<br />
1. Parallelism of reticle and FOV: Perfect / Within 1/2 Mil / Within 1 Mil / Over 1 Mil<br />
2. Magnification of FOV: Perfect / Over-Under 1/8 Mil / Over-Under 1/4 Mil / Over-Under 1/2 Mil<br />
3. Collimation: Perfect / Within 1/2 Mil / Within 1 Mil / Over 1 Mil<br />
4. Parallax: No Parallax / Within 1/2 Mil / Within 1 Mil / Over 3 Mil<br />
5. Eyepiece Focus: Perfect / Within 1/2 Diopter / Within 1 Diopter / Over 1 Diopter<br />
6. Adjustment of Counter Mechanism: Within 1/4 Mil / Within 1/2 Mil / Within 3/4 Mil / Over 3/4 Mil<br />
7. Backlash: Perfect / Within 1/2 Mil / Within 1 Mil / Over 1 Mil<br />
8. Sideplay: Perfect / Within 1/2 Mil / Within 1 Mil / Over 1 Mil<br />
9. Lift: Perfect / Within 1/2 Mil / Within 1 Mil / Over 1 Mil<br />
10. Cleanliness of Optics: Very Good / Good / Fair / Poor<br />
11. Instructor Assistance: No Assistance / 1 Time / 2 Times / More Than 2 Times<br />
12. Evaluation of Completed Pan Tel: Above Average / Average / Below Average / Far Below Average<br />
APPENDIX D<br />
FIRST PERFORMANCE EXAMINATION<br />
EVALUATION OF STUDENT'S<br />
MACHINIST PROJECT<br />
SHAPER PERFORMANCE TEST<br />
CLASS NUMBER ____ DATE ____<br />
MARK AN (X) IN THE PROPER BOX:<br />
STUDENT SELF-EVALUATION ____ GRADER EVALUATION ____<br />
PURPOSE: The purpose of this self-evaluation experiment is to obtain information<br />
that will help us to improve the machinist course. You will not receive<br />
a grade on your self-evaluation for this project, but an honest evaluation<br />
of your performance will be of a great help to us to evaluate the<br />
course.<br />
DIRECTIONS: Make a check in the TOLERANCE RANGE COLUMN for the TOLERANCE which you<br />
obtained on each of the listed dimensions to be checked.<br />
DIMENSIONS TO BE CHECKED / DIMENSION:<br />
1. [Name illegible]: 1.750"<br />
2. Width: 1.750"<br />
3. Depth from Top of V to Slot: 0.625"<br />
4. Width of First Step: 0.625"<br />
5. Width of Second Step: 0.625"<br />
6. Length of Block: 2-3/4"<br />
7. Linear Pitch: 25/64"<br />
8. Width of V: 1-3/8"<br />
9. Depth of Teeth: 0.770"<br />
10. General Appearance: Better than Average / Average / Below Average / Far Below Average<br />
Columns 4 through 1 record the tolerance range obtained on each dimension<br />
(for example, within ± 0.010", within ± 0.012", up to above or below the<br />
widest allowed range; the fractional dimensions use ranges such as ± 1/32"<br />
and ± 3/64", and Depth of Teeth uses +0.007"/-0.002"). [Most individual<br />
column entries are illegible in the source.]<br />
Could you finish project in the time allotted? ____ YES ____ NO. If answer<br />
is NO, tell why you didn't have enough time.<br />
267<br />
APPENDIX E<br />
[Results table for the four SET studies: for each course (including the<br />
44E20 Machinist Course), the number of matched pairs, the number of<br />
performance tests, the total number of performance tests, and whether the<br />
difference between the graders' mean scores for self-evaluators and<br />
non-self-evaluators was significant. Most entries are illegible in the<br />
source.]<br />
268<br />
A MODULAR APPROACH TO PROFICIENCY TESTING<br />
Robert W. Stephenson, Ph.D.<br />
Warren P. Davis, Col. USA (Ret.)<br />
Harry I. Hadley<br />
AMERICAN INSTITUTES FOR RESEARCH<br />
and<br />
Mrs. Bertha H. Cory<br />
U.S. ARMY RESEARCH INSTITUTE FOR THE<br />
BEHAVIORAL AND SOCIAL SCIENCES<br />
September 1973<br />
From a paper presented at the Fifteenth Annual<br />
Conference of the <strong>Military</strong> <strong>Testing</strong> <strong>Association</strong>,<br />
San Antonio, Texas, 28 October to 2 November 1973<br />
269<br />
INTRODUCTION<br />
A new, more specific language for describing work activities is being<br />
designed for the Army on an experimental basis.1 The new language is<br />
based upon a concept called the "duty module". Duty modules are clusters<br />
of tasks that tend to go together occupationally and organizationally in<br />
meaningful ways.<br />
The need to evaluate the feasibility of personnel information systems<br />
based upon clusters of tasks smaller than a <strong>Military</strong> Occupational Specialty<br />
(MOS) was originally suggested by personnel at the Army Research Institute.<br />
The development of the duty module concept was a team effort involving<br />
staff members from the Army Research Institute (ARI) and the American<br />
Institutes for Research (AIR).2<br />
Duty modules are rationally derived clusters of work activity based upon<br />
a detailed examination and grouping of task inventory or job analysis data.<br />
Attention is then given to ways in which these tentatively identified job<br />
content modules can be tested against various available criteria of operational<br />
utility. One relevant consideration is whether job content modules<br />
can be used as field assignment modules. Another possible application is in<br />
the area of requirement planning and unit effectiveness. A source of<br />
information here is data that can be gathered in conjunction with unit<br />
training and unit effectiveness exercises that are performed in the field.<br />
The word "module" was chosen because job activity clusters, like<br />
equipment components of the same name, are meant to be largely self-contained,<br />
independent units of work. For purposes of occupational classification,<br />
a duty module is a cluster of tasks that apply without modification<br />
in a number of occupational classifications or specialties.<br />
1 The work was carried out under Contract No. DAHC19-71-C-0004, "A<br />
Taxonomic Base for Future Information and Decision Systems", and Contract<br />
No. DAHC19-73-C-0041, "A Comparison of Officer Job Content Modules with<br />
Activity Grouping Implicit in Course Design", awarded by the Army<br />
Research Institute to the American Institutes for Research.<br />
2 Key personnel at ARI were Mr. Cecil Johnson, who provided the initial<br />
guidance as the Contract Officer's <strong>Technical</strong> Representative, Dr. J.E.<br />
Uhlaner, <strong>Technical</strong> Director, and Mrs. Bertha H. Cory, who succeeded Mr.<br />
Johnson as the <strong>Technical</strong> Representative.<br />
Key AIR staff members, in addition to Dr. Robert W. Stephenson, who<br />
was Project Director, included Dr. Robert Miller, who served as<br />
Consulting Scientist, Colonel Warren P. Davis, Mr. Harry I. Hadley,<br />
Dr. Edwin A. Fleishman, Dr. Albert S. Glickman, Mr. Clifford P. Hahn,<br />
Dr. Ronald P. Carver, and Mr. Albert Farina.<br />
270<br />
Another form of construct validation is available through examination<br />
of mission elements, which are activities that are used to design and evaluate<br />
the performance of organizational units.<br />
This presentation will describe two different kinds of modular evaluation<br />
devices -- sets of tasks performed by individuals, and sets of tasks<br />
performed by organizational units.3<br />
INDIVIDUAL PROFICIENCY TESTS<br />
Every individual proficiency test and every Army training test is<br />
already divided into special component sections with separate scores. Before<br />
going into detail about what modular component scores are supposed to<br />
do and supposed to look like, it is necessary to describe what these existing<br />
systems are like.<br />
ENLISTED PERSONNEL PERFORMANCE EVALUATION<br />
This discussion should be prefaced by noting that evaluation of the<br />
performance of enlisted personnel is an important responsibility of every<br />
commissioned and senior noncommissioned officer in the Army. The rewards<br />
and punishments associated with such evaluations give commissioned and<br />
noncommissioned officers the necessary control over enlisted personnel to<br />
maintain and improve effectiveness. In addition to this important supervisory<br />
function, however, there are a number of formal proficiency evaluation<br />
procedures for enlisted personnel that are conducted by various<br />
headquarters. The most important of these, for purposes of this paper,<br />
is the U.S. Army Enlisted Evaluation Center, located at Fort Benjamin<br />
Harrison, Indianapolis, Indiana.<br />
3 These developments are described in detail in Miller, R.B. A Taxonomic<br />
Base for Future Management Information and Decision Systems: Theoretical<br />
Background to the Design of Duty Modules; American Institutes for<br />
Research, Washington, D.C., Technical <strong>Report</strong> AIR-23500-7/71-TR-2, July<br />
1971. (U.S. Army Behavior and Systems Research Laboratory, BESRL <strong>Technical</strong><br />
Research Note, in preparation.); and Stephenson, R.W. (American Institutes<br />
for Research, Washington, D.C.) A Taxonomic Base for Future Management<br />
Information and Decision Systems: A Common Language for Resource and<br />
Requirement Planning; U.S. Army Behavior and Systems Research Laboratory,<br />
Arlington, Va., Technical Research Note 244 (AD-757-794), October 1972.<br />
271<br />
The Enlisted Evaluation Center is the major operating element of the<br />
formal enlisted evaluation system. It was established in 1958 as a Class<br />
2 activity of the Army. Its primary purpose then was to help the Army<br />
manage the proficiency pay program, which had been established in response<br />
to the recommendations of the Cordiner Committee--the Defense Advisory<br />
Committee on Professional and <strong>Technical</strong> Compensation--in 1956 and 1957.<br />
Monetary incentives were one of the Committee's proposals designed<br />
to improve personnel retention and job motivation among trained technical<br />
specialties and, at the same time, stimulate higher quality performance<br />
among all enlisted personnel. Proficiency pay, as a concept, emanated<br />
from this Committee recommendation. However, an underlying principle of<br />
this concept was that proficiency pay must be directly related to the<br />
demonstrated level of proficiency and must be contingent upon periodic<br />
checks to ensure maintenance of that proficiency. The Army enlisted<br />
evaluation system was developed to meet this requirement.<br />
ARMY ENLISTED EVALUATION SYSTEM<br />
The enlisted evaluation system consists of two major components:<br />
(a) evaluation of the enlisted man's knowledge of the various duties that<br />
are required at his skill level in his MOS, as indicated by MOS evaluation<br />
tests and performance tests; and (b) evaluation of performance in the<br />
currently assigned duty position, as indicated by supervisory ratings on<br />
the enlisted evaluation report (see Figure 1). A rating system is applied<br />
to the scores obtained on these instruments, and it is used to compute a<br />
composite score for taking individual personnel actions. This MOS evaluation<br />
score indicates the individual's relative standing among those evaluated<br />
in the same MOS and skill level and in the same pay grade. It is<br />
used to verify MOS qualification, to assist in determining promotion eligibility,<br />
to award proficiency pay, to guide remedial training, and in a<br />
variety of other personnel actions.<br />
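The scheme just described (component scores, a composite for personnel actions, and relative standing within the same MOS, skill level, and pay grade) can be sketched as follows, together with the minimum-hurdle idea shown in Figure 1. The 70/30 weighting, the hurdle cutoffs, and the peer scores are invented for illustration and are not the Army's actual conversion rules:

```python
# Sketch of the composite-scoring idea described above: component
# scores must clear minimum hurdles, are combined into a weighted
# composite, and the result is reported as relative standing among
# peers in the same MOS, skill level, and pay grade.  The 70/30
# weights, hurdle cutoffs, and peer scores are all invented.

def evaluation_score(test, eer, min_test=60, min_eer=50):
    """Composite score, or None if either minimum hurdle fails."""
    if test < min_test or eer < min_eer:
        return None
    return (7 * test + 3 * eer) / 10        # hypothetical 70/30 weighting

def relative_standing(score, peer_scores):
    """Percent of peers scoring at or below `score`."""
    return 100.0 * sum(s <= score for s in peer_scores) / len(peer_scores)

peers = [72.0, 80.5, 65.0, 90.0, 77.5]
mine = evaluation_score(test=85, eer=70)
print(mine, relative_standing(mine, peers))
```

Returning None on a failed hurdle mirrors the multiple-hurdle structure: a man who misses a minimum on either instrument receives no composite at all, regardless of his other score.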
PROFICIENCY TESTING "AREA SCORES"<br />
The characteristics of the MOS proficiency testing program will not<br />
be detailed herein, but one particular aspect of the MOS proficiency test<br />
program directly relevant to this paper will be considered--the MOS "major<br />
area" scores. Each MOS evaluation test is organized into six to nine<br />
major areas; that is, six to nine subscores. The six major areas for an<br />
Infantry senior sergeant, for example, are weapons, tactics, field activities,<br />
unit defense, administration, and personnel accounting. Study<br />
references from Army regulations, pamphlets, field and technical manuals,<br />
and other manuals are coded to each of the major areas in an accompanying<br />
study guide so that each soldier can locate the printed materials upon<br />
which the test is based. He can study these reference materials to improve<br />
his knowledge and performance.<br />
272
Figure 1. MOS Evaluation Score. (From U.S. Army Enlisted Evaluation Center,<br />
Briefing Supplement; Indiana, USAEEC, 1971.)<br />
[Diagram: test scores and EER scores pass a minimum multiple-hurdle screen,<br />
are converted into raw and composite scores, and yield the MOS evaluation<br />
score; Pro-Pay and Combat MOS conversions are indicated.]<br />
The major areas are weighted according to the relative importance
of the functions in the missions of all units that are authorized duty
positions in the MOS skill level, and not on the basis of the time required
to teach the subject matter in formal classroom courses nor on
the basis of the number of personnel assigned or authorized for specific
duty positions. The number of questions allocated to an area out of the
total number of items in an MOS proficiency test indicates the weight
assigned to that area. Subscores for these major areas are useful not
only to the soldiers tested, who can use the information to improve their
performance, but also to various headquarters, to centralized management
programs, and to commanders for managing assignments and training programs.
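The relation between item allocation and area weight can be sketched as follows. The area names follow the Infantry senior sergeant example above; the item counts are invented for illustration, not taken from any actual MOS test.

```python
# Hypothetical sketch: the weight a major area carries is implied by the
# share of test items allocated to it. Item counts are illustrative only.
item_allocation = {
    "Weapons": 30,
    "Tactics": 25,
    "Field Activities": 15,
    "Unit Defense": 10,
    "Administration": 12,
    "Personnel Accounting": 8,
}

def area_weights(allocation):
    """Return each area's implied weight as a fraction of total test items."""
    total = sum(allocation.values())
    return {area: count / total for area, count in allocation.items()}

weights = area_weights(item_allocation)
# The fractions sum to 1.0; the area with the most items carries the most weight.
```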
DIFFERENT APPROACHES TO DUTY AREAS<br />
Unfortunately, there is no consistent theoretical basis or consistent
approach to the definition of these area scores by test developers, training
personnel, or requirement planners (see Table 1). Some of the areas
for which the Enlisted Evaluation Center has developed scores can be
classified as duty areas. These area scores roughly correspond to subject
matter areas within an MOS. They are used to provide information as regards
an enlisted man's strengths and weaknesses in selected subject
areas, and they are associated with specific subject matter references.
Table 1. Different Approaches to Duty Areas

    Approach                   Duty Areas
    MOS proficiency tests      Selected by test developers
    Army training schools      Identified by systems engineering of training
    Requirement planners       Associated with additional skill identifiers
If subject matter references happen to be organized in terms of duty
areas, it is easier to find the appropriate references that need to be
studied; however, it is not essential. A sample list of major areas in
an MOS is given:

1. Weapons
2. Tactics
3. Field Activities
4. Unit Defense
5. Administration
6. Personnel Accounting

The study guides list the regulations and technical manuals for the various
areas, and no great amount of effort is needed to find the references
that correspond to the area in which a low score was received in the<br />
proficiency evaluation test (see Table 2).<br />
Table 2. Sample Study Guide

    References           Major Area
    Army Regulations
      65-75
      210-10
    DA Pamphlets
      600-8
      672-2
    Field Manuals
      5-15
      7-10

(The Major Area column entries are not legible in this copy.)
It should be clear that the use of area scores for proficiency tests
was an important development in the design of MOS proficiency tests. Such
feedback systems are an integral part of any sophisticated testing program.
What, then, would be questioned in the design of these area scores as they
are used by the proficiency testing system? The point raised is not so
much how the proficiency testing subsystem works, but the manner in which
these area scores interface with other personnel subsystems in the Army.
The Army training schools, for example, identify major duty areas at
great expense and with great difficulty, as part of their systems engineering
of training process (see Figure 2). Systems engineering of
training is a long, drawn-out procedure, involving detailed job analysis
and the application of systems engineering principles and approaches in
order to break a job into components and then select components for training.
The job is also organized into "areas" when a Program of Instruction
(POI) is prepared. There is no consistent relationship between these POI
area scores designed by the training people and the area scores as used
by the proficiency testing people.
Requirement planners are also interested in a different kind of duty
area. For example, the Army has additional skill identifiers (ASI) that
are authorized for functional skills, which are not consistently required
of all the job incumbents in an MOS. An example of such an additional
skill identifier is the ability to maintain a specific type of system
(e.g., maintenance on the Hawk Guided Missile Simulator, or the ability
to work with specially trained scout dogs). These different approaches to
duty areas are not necessarily incompatible with each other, but they are
all different. When you have three different parts of the same organization
-- and the Army is one organization -- using three completely different
Step 1. PERFORM JOB ANALYSIS
    Identify job (overview)
    Develop task inventory

Step 2. SELECT TASKS FOR TRAINING
    Criticality to job
    Percent performing
    Frequency of performance
    Learning difficulty

Step 3. PREPARE TRAINING ANALYSIS
    Identify job conditions and standards
    Develop training objectives and criteria
    Sequence training objectives
    Identify evaluation points

Step 4. PREPARE TRAINING MATERIALS
    Lesson plans

Step 5. DEVELOP TESTING MATERIALS

Step 7. PROVIDE QUALITY CONTROL
    Internal feedback
    External feedback

(Step 6 is not legible in this copy.)

Figure 2. Simplified Flow Process of Systems Engineering of Training.
From Southeastern Signal School Briefing Supplement, Systems engineering of training at USASESS, undated.
language systems to describe the same kind of work, it is likely that
there will be some unnecessary duplication of effort. This could be
avoided if a common language could be designed for all three parts of
the organization (e.g., the Army) to use.
THE DUTY MODULE<br />
A duty module is a group of occupationally interrelated tasks smaller
than an occupational specialty. It is modular in the sense that it can
be used as a plug-in unit to a variety of different occupational specialties.
Table 3 defines the module group, number, and title. Table 4
shows an MOS duty module matrix for Army military occupational specialties.
As one can see, a relatively small number of duty modules can
account for seven different MOS. Notice that each of these military
occupational specialties has demonstrable similarity with other MOS.
Table 3. Module Group, Number, and Title

A ADMINISTRATION

A-1 Performs general administration at unit level
A-2 Performs unit supervision and control of personnel
A-3 Establishes and operates a unit mail room
A-4 Types, files, and performs general clerical operations

B TRAINING

B-1 Conducts or participates in unit and individual training

C COMMUNICATIONS

C-1 Operates unit tactical communications equipment (excluding use of Morse code)
C-2 Installs and maintains unit tactical wire communication systems

D TRANSPORTATION

D-1 Operates unit combat support vehicles

(continued)
Table 3 (continued)<br />
E TACTICAL OPERATIONS

E-1 Prepares and employs maps, charts, and instruments in land navigation
E-2 Engages enemy with tank and Armor vehicle mounted assault weapons
E-3 Drives tanks and associated Armor combat vehicles
E-4 Emplaces, reports, and neutralizes tactical obstacles
E-5 Performs in mounted, dismounted, airborne or long-range patrols
E-6 Engages enemy with mortars
E-7 Participates in ground tactical operations as member of a maneuver unit
E-9 Engages enemy in close combat with individual weapons and machine guns
E-10 Engages enemy with recoilless rifles and direct fire missiles
E-11 Functions under CBR warfare conditions

F STAFF MANAGEMENT

F-1 Performs tactical operations support duties
F-2 Performs tactical intelligence support duties

G MAINTENANCE

G-1 Performs user maintenance on individual and unit equipment and weapons (excluding motor vehicles)
G-2 Performs organizational maintenance on track and wheel vehicle mechanical systems
G-3 Performs organizational maintenance on track and wheel vehicle electrical systems
G-4 Performs maintenance administration

H FOOD SERVICE

H-1 Establishes and operates a field mess
H-2 Prepares and serves meals
H-3 Operates a mess facility

I SUPPLY

I-1 Establishes and operates a unit supply

J PERSONNEL

J-1 Initiates, posts, files, and retrieves information from personnel records
J-2 Manages individual enlisted personnel and carries out manpower and personnel management programs
J-3 Processes personal affairs actions for individuals
[Table 4. MOS-Duty Module Matrix. Rows are the duty modules A-1, A-2, A-3, B-1, C-1, D-1, E-1 through E-7, E-9, E-10, F-1, and F-2; columns are the military occupational specialties 11B, 11C, 11D, 11E, 11F, 11G, and 11H. An X marks each duty module that applies to an MOS; the individual cell entries are not legible in this copy. See Table 3 for definitions of duty modules.]
It is also possible to use duty modules to express personnel requirements.
The list of work activities in Table 5 has not been formally approved,
but it illustrates the type of approach that can be used. Given
a data processing group, there is a need to supervise, to plan the analysis
and the reporting, to keypunch, and so on. The number of full-time
duty positions needed in the organization is ten. You can also specify
the requirements in terms of the number of people qualified to perform
each work activity. If you have a computer activity with a lot of night
shift work, you are going to need at least three people who can supervise.
[Table 5. Work Activity Requirements. For each work activity (supervision; planning of analysis and reporting; receipt and verification of input data; COBOL programming; keypunching; computer operation, including peripherals; interpretation of output; preparation of reports), the table gives the minimum number of people needed with that skill and the number (or proportion) of full-time duty positions. The work activity requirements total 30 and the full-time duty positions total 10.00; the remaining cell values are not legible in this copy.]
You may want one or two people for backup, in case of illness or vacations.
Even though there may only be three duty positions involving supervision,
you may want five people to be qualified as shift supervisors. Similarly,
in planning the analysis and the reporting, you may want three qualified
persons, but it is only a quarter-time job. In other words, defining work
activities in terms of duty modules can provide a more efficient use of
personnel, with all functions covered using a minimum number of personnel.
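The distinction between full-time duty positions and people who must be qualified can be sketched directly. The activities and figures below are illustrative assumptions, not the official Table 5 values.

```python
import math

# Each work activity carries two requirements: the full-time duty positions
# it occupies, and the number of people who must be *qualified* to perform
# it (qualified people can exceed positions, to cover shifts and backup).
# All activities and numbers here are illustrative assumptions.
requirements = {
    # activity:               (full_time_positions, people_qualified)
    "supervision":              (3.00, 5),
    "planning and reporting":   (0.25, 3),
    "keypunching":              (2.00, 2),
    "computer operation":       (1.50, 3),
}

total_positions = sum(p for p, _ in requirements.values())

# Staffing floor: enough people to fill the full-time positions, and at
# least as many as the largest single qualification requirement.
staff_floor = max(
    math.ceil(total_positions),
    max(q for _, q in requirements.values()),
)
```

One person can hold qualifications for several activities, which is why the floor is driven by positions and the largest qualification count rather than by the sum of all qualification counts.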
TESTS OF EFFECTIVENESS OF DUTY MODULES
Resource and requirement planning experts must first agree upon the
qualification requirements. Secondly, compatibility with work practices
in the field is involved. For this, actual survey data regarding the way
in which tasks are assigned in the field can be employed. A third test
is to evaluate the usefulness of the module in planning and evaluating
the requirements for and performance of organizational units.
TEST 1<br />
A first test is to ask experts to design job content modules. Typical
job content modules are:

(1) Operates unit tactical communications equipment,
excluding Morse code.
(2) Installs and maintains unit tactical wire
communication system.

In this case, the people who are asked to operate unit tactical communications
equipment are usually different from those who install and
maintain it. Neither of these work activities, however, is a full-time
position for anyone. These are modular things that can be assigned to
different people.
TEST 2<br />
A second test of duty modules is compatibility with assignment
practices in the field. Some data have already been analyzed (see Figure
3). Data were used that were already in existence, and that had been
collected with task inventories that were administered by the Army Office
of Personnel Operations. The data base is called the Military Occupations
Data Bank, or MODB. The original task statements in MODB are organized
in terms of functional areas of performance (see the administration and
training columns on the right side of Figure 3). The rows correspond to
task clusters that were identified in an empirical clustering of tasks.
[Figure 3. Test 2, Compatibility with Assignment Practices in the Field (in percent). The cell percentages are not legible in this copy.]
The Comprehensive Occupational Data Analysis Programs (CODAP)
system, developed by the Air Force, was examined to determine its applicability.4
This system was developed to cluster people rather than tasks, however. The
Army Research Institute modified the CODAP system to fit the problem at hand.5
People were not clustered together on the basis of their similarity in terms
of task performance; instead, tasks were clustered together on the basis of
the probability that the tasks would be assigned to the same people. On the
left side of Figure 3, this clustering is compared with some duty modules that were
analyzed in terms of the "Obverse Cluster Analysis" system designed (see
Table 3 for definitions of duty modules). The computer run suggested that
there were seven task clusters in this particular group of tasks. The percentages
shown in the duty module columns on the left side of Figure 3 indicate
the percent of tasks in each of the empirically identified clusters that fall
into duty module categories A-1 through H-2. The percentages shown on the
right-hand side of the figure indicate the percent of tasks in the empirically
identified clusters that fall into each of the administrative areas used to
group tasks in the Military Occupations Data Bank.
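A toy sketch of the "obverse" idea described above: clustering tasks by how often they are assigned to the same people, rather than clustering people by task similarity. The incumbents, tasks, and threshold below are invented for illustration and are not the Army Research Institute procedure itself.

```python
# Toy sketch: group tasks by co-assignment. Two tasks belong together when
# the probability that they are assigned to the same incumbents is high.
# The incumbents and task names are invented for illustration.
incumbents = {
    "smith": {"operate radio", "install wire"},
    "jones": {"operate radio", "install wire"},
    "brown": {"type reports", "file records"},
    "davis": {"type reports", "file records", "operate radio"},
}

def co_assignment(task_a, task_b):
    """P(both tasks assigned | at least one assigned), across incumbents."""
    both = sum(1 for t in incumbents.values() if task_a in t and task_b in t)
    either = sum(1 for t in incumbents.values() if task_a in t or task_b in t)
    return both / either if either else 0.0

def cluster(tasks, threshold=0.5):
    """Greedy single-link grouping: join a task to the first cluster in
    which it is strongly co-assigned with some member."""
    clusters = []
    for task in tasks:
        for group in clusters:
            if any(co_assignment(task, other) >= threshold for other in group):
                group.add(task)
                break
        else:
            clusters.append({task})
    return clusters

task_clusters = cluster(sorted({t for ts in incumbents.values() for t in ts}))
```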
It is important to note that the design of the duty modules is not
complete, nor is the preparation of the task statements. These task
statements will be revised to reflect additional job data and evidence
concerning the organization and application of command authority in field
units. Task statements will also be revised as the duty module system is
further developed.

It does not necessarily follow that there will be complete agreement
with the computer runs. For example, it is possible that the first three
empirical task clusters would be considered as really one cluster rather than
three. Before that conclusion can be made, many other occupational specialties,
in addition to the one that these data are based upon, will have to be reviewed.
Essentially, duty modules are derived from many different occupational
specialties rather than just one, which is the case with this particular
computer run. The decision as to whether these first three clusters are one
cluster or two or three clusters must be based upon data in other occupational
specialties as well.
TEST 3<br />
A third test of the utility of duty modules is the applicability to the
evaluation of unit performance in the field. Task checklist items have been
4 Bottenberg, R. A., and Christal, R. E., "An iterative technique for clustering
criteria which retains optimum predictive efficiency", The Journal of
Experimental Education, 36 (4), Summer 1968, 28-34.
5 Edison, I
devised based upon the duty modules. Each duty module consists of a collection
of task statements. A study of Army Training Tests (ATTs) revealed that
task statements could be converted into checklist items. This provides the
means by which information can be collected about the performance of individuals
during unit training tests. Separate scores are obtained for separate
practice maneuvers. Separate scores can be obtained for movement, defense,
and attack, as noted in Figure 4. Each exercise will have a scenario, the
possibility of casualties, and so forth. Duty modules will be looked at during
the appropriate phase of the Army Training Tests. They would not be tested
during every single phase.
[Figure 4. Step 3, Relationship to Unit Performance. For each duty module (e.g., C-1, establish and operate field communications relay station; G-1, make entries in equipment log books), a task checklist item is marked against the unit test phases (movement, defense, attack) in which it is evaluated; the individual marks are not legible in this copy.]
SUMMARY
These individual duty modules are designed to be derived from a variety
of specialties rather than just one. They are economical in the sense that
many different task inventories can be designed and jobs described with a
small number of duty modules. They can utilize task inventory data already
collected on assignment and work assignment practices in the field. They
meet an apparent need for consistency at a level of generality between the MOS
and the task, at an optimum level of detail. Ultimately, if duty modules can
provide a language acceptable to both resource planners and requirements
planners, they will facilitate communication and promote better matching of
men and jobs.
PERFORMANCE STANDARDS AND SKILL LEVELS
It is not possible to talk about testing in terms of duty modules without
first talking about testing standards. Performance standards in the
Army are prepared systematically as part of the systems engineering of training
process for the design of Army school courses. At one point in systems
engineering, task and skill analysis sheets are prepared. These result in
evaluation plans. It is a standard practice to indicate specific performance
standards for training purposes for each terminal or facilitating objective
on these evaluation plans (see Figure 5).
The evaluation plans only cover those areas of interest to the school,
however. They usually do not cover all the tasks in an MOS, especially
the more advanced tasks, which the personnel are supposed to learn on
the job. The Enlisted Evaluation Center, therefore, has to supplement this
school-oriented information with other sources. Usually, they use the judgments
of knowledgeable NCOs who have experience in the particular MOS, and who
formulate proposals regarding what tasks are appropriately included in an MOS
proficiency test.
1. Criteria for the Training Objective developed for the task:

Action: Troubleshoot AN/TRC-24. (A-??-1)

Condition: In addition to Standard Training Conditions, the
student is given an AN/TRC-24 with one major component
containing a DS- part defect as well as OS-8,
ME-30/U, TM 11-5820-257-12, TM 11-5820-257-34, and
AN/TRC-24 Block Diagram.

Standard: The student is qualified if, when given two defective
AN/TRC-24s with a 2-hour time limit on each,
he can isolate one of the defective parts.

Figure 5. Evaluation Planning Information Sheet.
The proficiency testing system in the Army, as it is presently organized,
provides separate tests for each skill level within each MOS. Occasionally,
one test may be used in two or three skill levels with different score requirements,
but the principle is the same. One reason for providing separate
tests for different skill levels is that an MOS is a broad collection of duty
areas that cover many different duty positions. Providing separate tests for
each skill level makes it possible to provide items that are more appropriate
for the positions being filled by those who take the tests.
This skill level approach, which is not incompatible with the duty module
concept that we have described, is appropriately used in connection with
duty modules, and, further, it illustrates how useful duty modules can be to
those who design tests. Table 6 lists the tasks for a duty module of patrolling,
either mounted or dismounted. One's skill level is dependent upon whether one
supervises a task, does the task and also supervises it, simply does the task,
or whether one just assists in doing it. The three skill levels in the 11B MOS
(Light Weapons Infantryman) are indicated. The skill levels are numbered 1, 2,
and 4. There are only three skill levels in this particular MOS, so there is
no skill level numbered 3. Table 6 indicates that people who are at skill level
4 are more likely to supervise. People who are at skill level 2 do not supervise,
and are much more likely to assist somebody. People who are at skill
level 1 carry out the orders and requirements of their superiors. This kind of
information about skill level profiles could be extremely useful to Army
organizations in designing proficiency tests.
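A skill-level profile of the kind Table 6 describes can be held as a simple lookup: for each task in the module, which performance expectation applies at each skill level. The specific cell values below are illustrative assumptions, not the actual Table 6 entries.

```python
# Illustrative skill-level profile for a patrolling duty module. The
# task-to-expectation mapping is an assumption for demonstration only.
profile = {
    # task: {skill_level: expectation}
    "plan patrol operations":   {4: "supervise"},
    "serve in combat patrols":  {4: "do and supervise", 2: "do", 1: "do"},
    "operate observation post": {2: "do", 1: "assist"},
}

def expected_role(task, skill_level):
    """Return the performance expectation for a task at a skill level,
    or None if the task is not expected at that level."""
    return profile.get(task, {}).get(skill_level)
```

A test designer could then draw items for a given skill level only from the tasks whose lookup is non-None at that level.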
Table 6. Duty Module E-5: Patrols, Either Mounted or Dismounted

Tasks:
(1) Plan patrol operations
(2) Assemble, inspect, issue patrol order, and lead patrol
(3) Operate listening or observation post
(4) Serve in combat patrols
(5) Serve in reconnaissance patrols
(6) Serve in ambush patrols
(7) Mark route or serve as guide for unit (b)
(8) Participate in air search operations or air delivered patrol
(9) Estimate charge, emplace and fire demolitions

For each task, the table marks the performance expectation (Supervise, Do and Supervise, Do, or Assist) for skill levels 1, 2, and 4 of the 11B MOS (a). The individual cell entries are not fully legible in this copy: skill level 4 appears under Supervise and Do and Supervise, and skill levels 2 and 1 under Do and Assist.

(a) There are only three skill levels in the 11B MOS: 1, 2 and 4.
(b) Task No. 7 (when performed by an 11B MOS) is normally supervised by someone in another MOS.
ADVANTAGE OF A MODULAR APPROACH IN TEST DEVELOPMENT
Previously mentioned were the ways in which the Army military occupational
specialties were modular. Refer to Table 4 and note that with
this matrix one can account for many different tasks with a relatively
small number of duty modules. Moreover, this matrix is just the corner of
a much bigger matrix. To date, 31 enlisted duty modules have been developed;
they completely account for 16 different 5-digit enlisted MOS.
To appreciate the possible savings it is necessary to translate this
information on duty modules and MOS into test items. Say that these 16
MOS would require 100 items apiece to account for them if they were developed
independently by various test-developing agencies. That would make 1,600
test items, if you used an independent approach to test development. We
estimate that it takes only 10 items apiece to describe a duty module.
In other words, it is possible that 310 items can do essentially the same
job as 1,600 test items. To be able to prepare 310 items rather than
1,600 items reflects a considerable savings. It is contingent upon defining
modules that cut across, and have the same meaning in, different
occupational specialties. In giving these figures, we have not discussed
skill levels; but different tests for different skill levels would be required
in both systems. Thus, multiply the number of items in our example
by three or four to get the number of test items that would actually
be needed by the people who design these tests and work with them.
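The arithmetic behind the savings estimate can be laid out directly. The item counts are the paper's own figures; the skill-level count of three is one of the multipliers the paper suggests.

```python
# Item-count comparison: independent test development vs. shared duty modules.
n_mos, items_per_mos = 16, 100         # 16 MOS at 100 items each
n_modules, items_per_module = 31, 10   # 31 duty modules at ~10 items each

independent_items = n_mos * items_per_mos      # items built independently
modular_items = n_modules * items_per_module   # items built once, shared

# Separate tests per skill level scale both pools equally, so the relative
# savings are unchanged (3 skill levels used here as one of the suggested
# multipliers).
skill_levels = 3
independent_total = independent_items * skill_levels
modular_total = modular_items * skill_levels
```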
MODULAR ARMY TRAINING TESTS (ATTs)
At the present time, the Army has several hundred Army Training Tests
(ATTs) for use in evaluating the performance of organizational units.
Each of these tests has a scenario and provisions for referees who are
trained to follow people and take notes as regards their performance in
the unit test. The question to be posed here is this: Would it be desirable
for Army Training Tests for organizational units to be organized
in the same modular fashion that has been proposed for individual proficiency
tests?
PURPOSE OF ORGANIZATIONAL UNITS<br />
Before further discussion of the feasibility of modular ATTs, the
accuracy and consistency with which the intended purposes of organizational
units have been specified must be considered. The Army has several different
terms for describing the intended purposes of organizational units:
a primary mission, some functions, and a capability.

A primary mission is defined as the principal purpose that an organization
is designed to accomplish. The functions are the appropriate or
assigned duties, responsibilities, missions, or tasks of an individual
office or organization. A capability is the ability to execute a specified
course of action. Further details will not be discussed except to
state that after studying the various mission, functional, and capability
statements in Army documents, it was concluded that the capability statement
was the one that should be used as the basis for structuring organizational
unit testing modules.
THE CAPABILITY STATEMENT<br />
A possible capability statement for a theoretical unit is shown in
Table 7. It is clearly possible to analyze a capability in terms of specific
component functions and operational criteria, as shown in the table.
Table 7. Theoretical Capability of a Unit

1. Title: Transport supplies and resupply itself.

2. Essential Component Functions:

a. Load, move, and unload unit loads of rations, POL, ammo, and repair parts.
b. Repair minor vehicular failures enroute.
c. If unable to make minor vehicular repairs, tow inoperable vehicles.
d. Move unit loads on the road or cross-country.
e. Pick up and issue supplies.

3. Minimum Operational Criteria:

a. Sufficient vehicles on hand in condition to move unit load.
b. Sufficient vehicles on hand in condition to pick up and deliver supplies.
c. Trained drivers.
d. Authorized maps and compasses on hand.
e. Supply platoon trained as a team.
f. Trained supply personnel.
g. Trained vehicular and radio mechanics.
h. Satisfactory status of equipment maintenance.
i. Satisfactory completion of ATT and FTX.

4. Standards (To be developed):

Includes minimum personnel, skills, operable equipment, and training
necessary to be considered C-1, C-2, or C-3 as defined in AR 220-1.
Standards below C-3 are C-4.
The capability of the unit is to transport supplies and to resupply itself.
What is involved in this capability are the functions of loading,
repairing vehicles, picking up initial supplies, and so forth. The minimum
operational criteria are also indicated. A Department of the Army
study of output measurement conducted in 1968 [6] shows the same conclusion
that is arrived at here--that capabilities are a good way to structure and
organize ATTs.
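A capability statement of the kind shown in Table 7 can be held as a structured record so that testing modules can be generated from its parts. This is a sketch only: the field names are assumptions, and the example content paraphrases Table 7.

```python
from dataclasses import dataclass

# A capability statement as a structured record (field names are assumed;
# the example content paraphrases Table 7).
@dataclass
class Capability:
    title: str
    component_functions: list
    operational_criteria: list
    standards: str = "to be developed"

resupply = Capability(
    title="Transport supplies and resupply itself",
    component_functions=[
        "Load, move, and unload unit loads of rations, POL, ammo, repair parts",
        "Repair minor vehicular failures enroute",
        "Pick up and issue supplies",
    ],
    operational_criteria=[
        "Trained drivers",
        "Authorized maps and compasses on hand",
        "Satisfactory status of equipment maintenance",
    ],
)
```

Because the component functions are discrete entries, a modular ATT could draw one evaluation module per function rather than treating the capability as a single monolithic test.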
CRITERIA FOR MODULAR ATT EVALUATION DEVICES<br />
A number of criteria for an improved system of evaluating performance in
Army Training Tests have been formulated. Details are beyond the scope of this
presentation, but the criteria and related recommendations are given.

* A series of clear, quantitative statements specifying the
capabilities of a unit.

* A taxonomy of unit capability statements. If possible,
these statements should be modular.

* Criteria of unit effectiveness based upon and relatable
to the taxonomy of unit capabilities, and consistent with
and relatable to criteria for the performance of individuals
in the unit.

* Both kinds of criteria (individual and unit) based upon
performance standards rather than relative standing in test
performance.

* Varied performance standards depending upon situational
conditions, such as terrain, percent casualties, and
resource inputs.

* Aggregation of standards for organizational components and
generation of an overall index for the unit as a whole.

* The emphasis in criteria statements upon end results rather
than methods used to achieve the results.

* Output measures allowing for the possibility in evaluations
of corrective measures that may have been taken by command
personnel, and that would permit the unit to meet standards
in spite of some departure from expected procedures.

* Scoring that provides specific information about the leadership
of a unit.
6 Department of the Army. Improvement of Output Measurement: Report of a
Special Study by the Army Staff Coordinated by the Comptroller of the
Army, January 1968.
* Scoring that provides comprehensive evaluations of the
unit when tested as an entire unit.

* Quantitative weights assigned for satisfactory performance
and deducted for inadequate performance that reflect the
probable seriousness of the actions.

* A method for relating output measures to input measures
logically.

SUMMARY
A series of clear, quantitative statements is needed to specify
the capabilities of a unit. There is also a need for a taxonomy of unit
capability statements. A taxonomy is a theoretically-based language
that implicitly classifies or categorizes a capability statement at the
same time that it describes it.
Capabilities should also be modular in nature; that is, the capability<br />
statement for an Infantry battalion should be the same, if possible,<br />
as a highly similar, closely related capability statement for an<br />
Armored Cavalry squadron. If the capability statements can be designed<br />
in this way, and the testing organized accordingly, it is possible to design<br />
one modular unit training test component that would be useful for Infantry<br />
battalions and for Armored Cavalry squadrons. Examples of possible<br />
modules are night ground attack, retrograde movement, and stationary<br />
defense. Since many capabilities of different kinds of battalions<br />
are common, similar tests provide many economies.<br />
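The modular idea can be made concrete with a small illustration: keep a library of test modules and assemble a unit's training test from the modules its capability statements call for. The module names below are the examples from the text; their one-line contents and the unit capability lists are invented placeholders.<br />

```python
# Sketch of a modular ATT: a shared library of test modules (names from
# the text) assembled per unit type. Module descriptions and the unit
# capability lists are placeholders of our own invention.

test_modules = {
    "night ground attack": "evaluate movement, control, and assault at night",
    "retrograde movement": "evaluate delay and withdrawal under pressure",
    "stationary defense": "evaluate preparation and conduct of a defense",
}

unit_capabilities = {
    "Infantry battalion": ["night ground attack", "stationary defense"],
    "Armored Cavalry squadron": ["night ground attack",
                                 "retrograde movement"],
}

def assemble_att(unit):
    """Pick the library modules that match this unit's capabilities."""
    return {m: test_modules[m] for m in unit_capabilities[unit]}

print(sorted(assemble_att("Infantry battalion")))
```

An Infantry battalion and an Armored Cavalry squadron would then share the night ground attack module outright, which is the source of the economies the text describes.<br />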
CURRENT WORK ON MODULAR ATTS<br />
Having proposed the design of modular Army Training Tests and proceeded<br />
to design some, field survey work was conducted in conjunction with Army<br />
Training Tests at Fort Lewis, Washington, in August 1973, to pre-test the<br />
first versions of these modular evaluation devices. These devices and data<br />
have not yet been analyzed or evaluated; thus comments are based upon what<br />
was learned during the design stages.<br />
PERSONNEL CAPABILITIES<br />
In designing these evaluation devices, two things quickly became<br />
apparent: (a) devices could not be designed for units based upon enlisted<br />
duty modules alone; and (b) there was a need for officer duty modules. A<br />
contract already under way was addressing officer<br />
jobs. Table 8 defines officer duty modules by area and module number.<br />
Table 9 accounts for all the major duties of an officer’s job with just<br />
a few duty modules. Each of the positions shown is completely accounted<br />
for by the officer duty modules listed. (The letter refers to the<br />
“areas” from which the module was taken in Table 8, and the number<br />
identifies a specific module within that area.)<br />
Table 8. Officer Duty Modules by Area<br />
<br />
Area  Title                                                      Number of Modules<br />
A     Command Management, General Management and Administration          9<br />
B     Personnel                                                          4<br />
C     Intelligence                                                       5<br />
D     Operations and Plans (Staff)                                       4<br />
E     Organization, Training                                             3<br />
F     Logistics (Staff and Consumer Units)                               9<br />
G     Communications and Electronics                                     2<br />
H     Civil-<strong>Military</strong> Affairs                                             3<br />
I     Comptrollership, Budget and Fiscal                                 2<br />
J     Army Aviation                                                      5<br />
K     Research, Development, Test and Evaluation                         2<br />
L     Operations Research and Systems Analysis                           1<br />
M     ADP Management and Programming                                     1<br />
N     Education, Instruction                                             2<br />
O     Information Activities                                             1<br />
U     Tactical Direction of Combat Units                                 5<br />
W     Miscellaneous                                                      9<br />
X     Individual Functions and Special Qualifiers                        4<br />
FF    Logistical Services                                                9<br />
HH    Supply and Maintenance Support Operations                          9<br />
Table 9. Application of Duty Modules to Officer Positions<br />
<br />
Position                             Duty Modules<br />
Cdr., Infantry Rifle Co. (CPT)       A-1, A-3, A-4, A-6, A-8, E-1, F-1, X-1, X-2<br />
Cdr., Reception Station Co. (LT)     A-1, A-3, A-5, F-1<br />
Asst. Army Attache (LTC)             A-1, A-4, C-4<br />
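The area-letter-plus-number coding of Tables 8 and 9 maps naturally onto a simple lookup structure. The position data below is the Table 9 sample (as best it can be read from the scan); the `shared_modules` helper is our own illustration of how common modules across positions could be identified to support shared test components.<br />

```python
# Duty-module lookup built from the Table 9 sample. Codes are
# "<area letter>-<module number>". The overlap function below is an
# illustration, not part of the original study's machinery.

duty_modules = {
    "Cdr., Infantry Rifle Co. (CPT)": {"A-1", "A-3", "A-4", "A-6", "A-8",
                                       "E-1", "F-1", "X-1", "X-2"},
    "Cdr., Reception Station Co. (LT)": {"A-1", "A-3", "A-5", "F-1"},
    "Asst. Army Attache (LTC)": {"A-1", "A-4", "C-4"},
}

def shared_modules(pos_a, pos_b):
    """Modules common to two positions; one test component per shared
    module could serve both."""
    return sorted(duty_modules[pos_a] & duty_modules[pos_b])

print(shared_modules("Cdr., Infantry Rifle Co. (CPT)",
                     "Cdr., Reception Station Co. (LT)"))
```

Even in this three-position sample, every pair of positions shares at least module A-1, which is the economy the paper argues for.<br />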
EQUIPMENT CAPABILITIES<br />
In addition to information on the personnel capabilities of a unit,<br />
there was a need for detailed information about the capabilities of<br />
equipment. The need to think about equipment capabilities became apparent<br />
when preparing a unit capability table, which described different<br />
types of capability for each component in a platoon (Table 10).<br />
Table 10. Equipment Capability Table for Armored<br />
Cavalry Platoon (TOE 17-107H)<br />
<br />
Item of Equipment           Basis of Issue     Capability                        Reference<br />
1. Antenna (AT-784/PRC)     2-Scout Section;   Determine the direction to a      ST-24-18-1<br />
                            5-Rifle Squad      specific radio transmitting in<br />
                                               the frequency range 30.0 to<br />
                                               75.95 MHz.<br />
2. Armored Reconnaissance   4-Light Armor      Negotiate almost any terrain      ST-17-1-1;<br />
   Airborne Assault         Section            at speeds from 4 miles per        ST-17-15-1;<br />
   Vehicle (M551)                              hour in water to 43 mph on        FM-17-36<br />
                                               roads, including 7-foot spans,<br />
                                               33-inch vertical obstacles,<br />
                                               and 60% grades.<br />
It was not possible to describe the fire-power capabilities of a<br />
squad that was equipped with a certain type of machine gun, for example,<br />
without knowing what type of machine gun it was. One machine gun might be<br />
capable of sustained firing of 40 rounds per minute, while another<br />
might fire 100 rounds per minute. The range of the first machine gun<br />
might be 6,000 meters, while the range of the second, faster machine gun<br />
might be 3,000 meters. Clearly, any quantitative statement of the capabilities<br />
of that squad, and hence the capabilities of the whole platoon, is<br />
greatly affected by which of the two machine guns is being used. Similar constraints<br />
upon unit capability statements are imposed by the kind of transportation<br />
that is available. This type of information has obvious implications<br />
as regards the capabilities of a unit, and must be incorporated or<br />
delineated in unit capability statements.<br />
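The dependence of a capability statement on the specific equipment held can be sketched as follows, using the two hypothetical machine guns from the paragraph above (40 rounds per minute at 6,000 meters versus 100 rounds per minute at 3,000 meters). The function, the gun labels, and the statement wording are illustrative only.<br />

```python
# Sketch: a squad fire-power capability statement generated from the
# equipment actually held. The two gun specs come from the text's
# hypothetical example; everything else is our own illustration.

guns = {
    "gun_a": {"sustained_rpm": 40, "range_m": 6000},
    "gun_b": {"sustained_rpm": 100, "range_m": 3000},
}

def squad_capability(gun, num_guns):
    """Produce a quantitative capability statement for a squad holding
    num_guns weapons of the given type."""
    spec = guns[gun]
    return ("Deliver sustained fire of %d rounds per minute "
            "out to %d meters."
            % (num_guns * spec["sustained_rpm"], spec["range_m"]))

print(squad_capability("gun_a", 2))
```

Swapping `gun_a` for `gun_b` changes both numbers in the statement, which is exactly the constraint the text says must be delineated in unit capability statements.<br />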
PRELIMINARY WORK ON EVALUATION DEVICES FOR ATTS<br />
As stated earlier, one of the requirements for the individual enlisted<br />
modules is that they be meaningful in terms of unit evaluation procedures<br />
as well as individual evaluation procedures. Every caution must be<br />
exercised in using or modifying duty modules at this stage. The task list<br />
for a given MOS may or may not adequately sample the tasks performed by<br />
incumbents. The task list also may or may not include all the tasks which<br />
make up the duty modules for that MOS, and finally, duty assignments may exist<br />
which are inappropriate to a duty position designation within an MOS.<br />
APPROACH TO THE PROBLEM<br />
The first approach was to develop examples of criterion behavior for<br />
a particular duty module, but it did not seem to be workable. An example<br />
of overall unit performance rating procedures is shown in Figure 7.<br />
Checklist Item                                     ATT Phases<br />
1. Replacements properly received and assigned<br />
2. Losses and casualties properly processed<br />
3. Leadership of platoon and squad NCOs<br />
4. Duties of subordinates properly allocated<br />
<br />
Figure 6. An Approach to Unit Evaluation Devices<br />
Based upon Task Statements.<br />
Activity Rated: Rifle Platoon                      Score    Comments<br />
Phase 1--Daylight Attack<br />
Proper actions and preparations in<br />
assembly area?<br />
Proper organization, formation, and<br />
dispersal?<br />
Platoon’s use of cover and concealment?<br />
Firing on objective--good volume, well<br />
directed? (Scored only for live firing)<br />
<br />
Figure 7. Evaluation in Terms of Unit Performance As a Whole.<br />
RECOMMENDED APPROACHES AND PLANS<br />
APPROACH<br />
Tests have usually been designed by taking a particular criterion situation<br />
and designing a test for that one situation, irrespective of how it is<br />
structured for that particular case at that particular point in time. If,<br />
for example, a proficiency knowledge test is being designed for somebody<br />
who repairs automobiles, you determine exactly which tasks people in that<br />
MOS are supposed to perform, then you design the individual proficiency tests<br />
to measure knowledge of those specific tasks.<br />
The same is true of unit training tests. You look at the capability<br />
statements, study the terrain in which the test will be conducted, and<br />
design the test for that specific situation.<br />
The Army has already embarked upon a large-scale program to overhaul all<br />
of its unit training tests. The Army recently designed a systems engineering<br />
of unit training programs that is patterned after its systems engineering of<br />
training procedures for individual training. Several organizations are<br />
currently working on the redesign of unit training programs in terms of the<br />
Army’s systems engineering of unit training programs. Thus, a number of new<br />
evaluation devices will definitely be designed. These new approaches to<br />
modular unit training tests have been discussed with those who are responsible<br />
for this type of work, and it is believed that they will consider a modular<br />
approach to ATTs as an alternative when the tests are revised.<br />
USES OF DUTY MODULES<br />
This paper has stressed the economy of duty modules in terms of test<br />
preparation costs and the importance of consistency in language. Other<br />
possible advantages are given below.<br />
(1) Duty modules can improve occupational research, description,<br />
and utilization.<br />
(2) They have the potential to reduce training time and<br />
lower training costs.<br />
(3) They can provide a better use of individuals in<br />
assignment substitutions.<br />
(4) They can simplify automated assignment and control<br />
procedures.<br />
(5) They can improve proficiency evaluation.<br />
(6) They can improve career guidance and planning.<br />
(7) They can improve utilization of personnel at a<br />
local level.<br />
(8) They can improve unit training evaluations.<br />
PRESENT IMPLEMENTATION OF PROCEDURES<br />
It would take many years before this kind of modular system could be<br />
established. If everyone agreed that it should be done tomorrow, it would<br />
still take several years before such a system could become operational.<br />
Nevertheless, the advantages of a modular approach to test construction are<br />
so great that people should start thinking about it now. The problem arises<br />
because test developers cannot proceed by themselves. The whole occupational<br />
structure needs to be revised so that occupational specialties would be<br />
defined in terms of duty modules.<br />
Nevertheless, there are several procedures that could be started now.<br />
For example, change the way in which test design work groups are established.<br />
Instead of assigning test development to a group of experts in the same<br />
specialty, create a work group comprising representatives from several<br />
specialties, and ask that group to design a single test component that would<br />
be useful to them all. A similar approach (i.e., assigning groups of experts<br />
from different types of units) can be used with those who design unit evaluation<br />
devices. These preliminary steps, taken now, would have many immediate<br />
advantages, and would greatly facilitate a conversion to modular testing in<br />
the future.<br />
THE GROWING DEMAND FOR HUMAN PERFORMANCE TESTING<br />
J. E. GERBER, JR.<br />
ABSTRACT<br />
This paper reports the human factors involved in moving United States<br />
Army Infantry School instructors away from norm-referenced, knowledge<br />
sampling testing and into criterion-referenced performance testing.<br />
It sketches the gradual convergence of and increased communication<br />
between subject matter experts (instructors) and test psychologists<br />
(quality controllers) from group orientation through stylized example<br />
to specific problems of performance testing, and compares the dynamics<br />
with those observed by educators in civilian schools.<br />
INTRODUCTION<br />
Parallel innovations have been occurring and recurring in different<br />
types of schools in separated parts of the country.<br />
In April, 1973, Doll, Love and Levin (1) reported as follows on their<br />
experiences in implementing a new instructional model in Louisville<br />
elementary and high schools.<br />
“In Louisville, Kentucky, a brave new world of educational<br />
innovation was recently attempted. Bright ideas and good<br />
intentions, a situation ripe for change, and a golly-gee,<br />
whiz-bang attitude at many levels of decision making<br />
augured well for its success. Education was to roll<br />
forward. History was to be made. But somewhere along<br />
the way the wheel had to be rediscovered and history not<br />
only was made, it was remade.”<br />
Two years ago, Kramer and Kneisel (2) showed you a model for<br />
course design as it was then being developed at the US Army Infantry<br />
School, and related to you a dynamic plan for imposing (2a) this model<br />
onto on-going courses of instruction. Their model is depicted on this<br />
slide.<br />
(SLIDE 1 ON)<br />
SLIDE I, SYSTEMS ENGINEERING PROCESS<br />
The two-headed arrow labeled quality control in the middle of the slide<br />
implies that quality control takes place at every step of the systems<br />
engineering process. Our concentration today will be on criterion<br />
testing.<br />
(SLIDE OFF)<br />
Thus, we at Infantry School had a model and an implementation plan<br />
for training adults to perform specific jobs.<br />
Meanwhile, Getz and others (3), who were changing from a traditionally<br />
based to a competency-based programmed instructional model for<br />
teacher education at Illinois State University, provided the following<br />
admonition in early 1973:<br />
“While the rationale for any program can be developed<br />
logically, the actual implementation invariably creates<br />
many problems that cannot be anticipated. . . The results<br />
are rewarding, but adjustment by staff and students is<br />
slower and more agonizing than most would suspect.”<br />
PURPOSE<br />
The purpose of this paper is to compare human experiences at the<br />
Infantry School with those at other institutions in changing instructional<br />
models; to focus on the impact of changing from norm-referenced<br />
tests of knowledge to criterion-referenced tests of abilities; to discuss<br />
some of the dynamics implied; and to draw some inferences.<br />
Implementation of the systems engineering model at the Infantry School<br />
has been a tremendous undertaking, both from the technical and the<br />
human side.<br />
Systems engineering documentation for the first of some 22 active<br />
courses of instruction began in June 1968. We expect to have the<br />
final one completed in June 1974.<br />
Criterion testing was continued in those areas where it already existed,<br />
that is:<br />
Pure performance testing, in which students performed job tasks<br />
against pass/fail criteria, continued in tests of physical fitness,<br />
weapons firing qualification, parachute jump qualification, and land<br />
navigation.<br />
Performance testing under conditions simulating job conditions<br />
and against subjectively rated criteria was carried on in tactical map<br />
problems.<br />
But, to a large degree, students were graduated from our courses on<br />
their ability to answer multiple choice items which required recognition,<br />
recall, or comprehension of principles, and which were graded<br />
“on the curve.”<br />
On 27 February 1973, we determined to make all tests performance-oriented<br />
so that all would have the student apply job-required skills,<br />
knowledges, and attitudes to an identified job task. While the<br />
reverberations of this decision were felt throughout the School, they were<br />
experienced most acutely by the instructor who relied most heavily<br />
on multiple choice items testing recognition or recall in his area of<br />
expertise. In order to relate subsequent events, observe the stylized<br />
functional diagram of the School organizational activities on this<br />
slide:<br />
(SLIDE 2 ON)<br />
The test psychologists or quality controllers are the principal guardians<br />
of the instructional model presented earlier. Obviously, they,<br />
through their actions and judgments, are focal points for convergence<br />
of differences in rationale, procedure, and requirements.<br />
The instructors, of whom there are approximately 1,050 in the School,<br />
have multifaceted roles under the systems engineering model. Lecturing<br />
a class of students, as he might have done daily in earlier times, is<br />
only one of the instructor’s time consuming activities today. He works<br />
closely with systems engineers in documenting his area of expertise.<br />
He prepares his lesson plans with all that this entails. He works with<br />
quality controllers in his design and construction of tests and examinations.<br />
In fact, his role is not unlike that of the Illinois State University<br />
faculty member described by Getz and others (3a) and shown on the<br />
next slide.<br />
(SLIDE 2 OFF)<br />
(SLIDE 3 ON)<br />
SLIDE 3 THE FOUR AREAS OF RESPONSIBILITY. . .<br />
In addition to those marked on the slide, the USAIS instructor has other<br />
instructional duties which are physically and mentally demanding, such<br />
as field problems and weapons firing. The instructor faces massive<br />
problems of priority.<br />
(SLIDE 3 OFF)<br />
The multiple demands upon the instructor’s time and attention account<br />
in part for the spectrum of response to the requirement for performance<br />
testing. I emphasize the word spectrum, for a spectral response<br />
did occur, varying from, “Great, I’m doing real world performance<br />
testing already,” to “No way.” We applaud the former and the many<br />
who flocked to it in short order, but it was those on the “No way” end<br />
of the scale who required the most attention.<br />
This paper deals with dynamics of change. If a given reaction or<br />
attitude was encountered at all, it is discussed here for its dynamics,<br />
not for its frequency.<br />
Quality controllers conducted a series of free-wheeling meetings with<br />
groups of 10 to 20 instructors at a time to surface problems and to<br />
establish the kind of dialogue and openness which Doll and his Louisville<br />
associates (1a) believe essential. The problems surfaced bore a<br />
marked resemblance in kind if not in content to those reported by Gross<br />
and his Harvard colleagues (4) in their attempt to implement a modular<br />
change in a small elementary school. These were:<br />
1) Staff resistance;<br />
2) Misconceptions of the model and roles within it;<br />
3) Lack of expertise;<br />
4) Lack of materials and resources;<br />
5) Incompatibility with the model.<br />
It became apparent to us that not only were there some problems of<br />
implementation; many of them were problems that had been raised<br />
elsewhere some years before, as will be related in part below.<br />
We determined that, having uncovered our own choke points or bottlenecks,<br />
we should press on while heeding the following speculation of<br />
Doll and his associates (1b):<br />
“. . . the wheel does have to be eternally re-invented. . . ;<br />
unless you go through every agonizing step yourself, whatever<br />
you accomplish ultimately will be superficial and<br />
unimportant. . . Each of us must discover for ourselves<br />
where it hurts the most, but the experience of others may<br />
be helpful in learning how to cure the wound.”<br />
In this endeavor, we determined to work toward changing old concepts<br />
and practices into new ones using a proactive-retroactive transitional<br />
rationale described by the present author (5) in 1964. On the way we<br />
noted that Popham (6), at least as early as 1968, warned the educational<br />
community that many arguments against “stating instructional<br />
goals in terms of measurable learner behaviors” were being voiced.<br />
Popham felt that all these arguments were invalid, so, in order to<br />
refute them, he codified these arguments into eleven reasons why a<br />
behavioral model was or is said to be improper. Innovators at the<br />
Infantry School, at Illinois State University (3), and Louisville Public<br />
Schools (1) encountered several of these “reasons” when changing from<br />
one instructional or testing model to another. Therefore, it seems<br />
appropriate to compare experiences and management techniques at<br />
these three schools, using the negative arguments as focal points.<br />
Perhaps others planning to roll toward pedagogical innovations may<br />
profit by our re-invention of the wheel.<br />
Popham’s negative reason number 1 is shown on this slide.<br />
(SLIDE 4 ON)<br />
SLIDE 4 NEGATIVE REASON 1: TRIVIAL LEARNER BEHAVIORS<br />
(SLIDE 4 OFF)<br />
A comparison of experiences and management at the three schools<br />
which I have related to Reason 1 is shown on this slide:<br />
(SLIDE 5 ON)<br />
SLIDE 5 REASON 1: “TRIVIA”<br />
We see that USAIS and ISU (3b) encountered the problem, whereas<br />
Louisville (1c) apparently had a more positive experience even though<br />
some of the sample objectives presented seem to be taxonomically low.<br />
In our experience, operationalizing at the lowest level was easy. It<br />
took the form of, “At the conclusion of this instruction, you will be<br />
able to list, state, compare, etc.” The student needed only sufficient<br />
mental capacity to recognize, recall, or understand instruction in order<br />
to pass tests.<br />
(SLIDE OFF)<br />
The mental capacities and the elicited behaviors are at the lower end<br />
of Bloom’s (7) taxonomy of educational objectives, illustrated as a<br />
stairstep or hierarchy by this slide:<br />
(SLIDE 6 ON)<br />
SLIDE 6 HIERARCHY OF MENTAL REQUIREMENTS. . .<br />
This slide, which is a modification of Towne’s (8) illustration of<br />
Bloom’s taxonomy, compares mental skills with job performance.<br />
Our difficulty arose in trying to raise the taxonomic level from the<br />
first two levels--knowledge and comprehension--to the applicatory<br />
level and higher. In the case of knowledge, we want to say, “Given<br />
a situation such as you are likely to encounter in your future job, you<br />
will be able to apply your newly acquired knowledge well enough to<br />
analyze your situation, synthesize all pertinent factors, make an<br />
evaluative judgment, and behave appropriately.”<br />
We found solidly embedded defense of subject matters as having intrinsic<br />
value--knowledge for knowledge’s sake--regardless of the job<br />
answer had been generated more affectively than cognitively, the<br />
technique succeeded where logical arguments and cogent reasoning<br />
had failed. Some instructors were more concerned with the minutiae<br />
of goal achievement than with the achievement itself. It was more<br />
important, for example, for the student to know the fact that a hole<br />
must be dug to a certain depth than it was to specify acceptable<br />
behavior for determining hole depth. This attitude was amenable<br />
to suggestion that a slight change of teaching strategy would present<br />
both the doctrine and the terminal behavior.<br />
Popham’s reason 2 (6b) is shown on this slide:<br />
(SLIDE 7 ON)<br />
SLIDE 7 REASON 2: PRESPECIFICATION OF EXPLICIT GOALS. . .<br />
(SLIDE OFF)<br />
The comparison of the three schools is shown on this slide.<br />
(SLIDE 8 ON)<br />
SLIDE 8 REASON 2: “UNEXPECTED OPPORTUNITIES”<br />
We see that experiences among the three schools differ. The differences<br />
may stem from the different frames of reference, i.e., adult,<br />
face-to-face instruction at USAIS; adult self-instruction at ISU (3a);<br />
and elementary child face-to-face instruction at Louisville (1d). In<br />
our experience, serendipity in the classroom is rather tightly controlled,<br />
both administratively and by the students themselves. There<br />
is a growing demand by students to be informed of the precise goals<br />
of impending instruction, aided by administrative demand for precise<br />
statement of both immediate and ultimate goals. Novel, spontaneous<br />
student responses can still occur under this approach. Tests, obviously,<br />
must test the training objectives, not the “old war story” which might<br />
be used to illustrate solution of a job task dilemma.<br />
(SLIDE OFF)<br />
Popham’s reason 3 (6c) is shown on this slide.<br />
(SLIDE 9 ON)<br />
For the present purpose, I shall compare and discuss only changes<br />
in the professional staff.<br />
(SLIDE OFF)<br />
Comparisons of staff attitudes and behaviors are shown on this slide.<br />
(SLIDE 10 ON)<br />
SLIDE 10 REASON 3: “ATTITUDE CHANGE”<br />
The reaction to proposed change was spectral at all three schools,<br />
so we may concentrate discussion on the resistance or opposition end<br />
of the scale.<br />
Some members of the USAIS professional staff expressed themselves<br />
as follows: “We have gotten along without job knowledge performance<br />
testing for nearly 200 years and we have won many wars during that<br />
time; why, all of a sudden, do we have to change now?”<br />
Those instructional staff members who held this or related views<br />
presented the greatest challenge because, unlike the Louisville plan<br />
(1e), our implementation plan provided no immediate escape for those<br />
opposed to change. Moreover, neither we nor the Louisville schools<br />
(1a) had the option of closing down while we sorted ourselves out.<br />
Some viewed systems engineering and resultant requirements for<br />
performance testing as change for the sake of change, others as schemes<br />
dreamed up independently by quality controllers who, as a result,<br />
sustained a certain amount of personal denigration to which Doll (1f)<br />
also refers. Some viewed the changeover as direct attacks on their<br />
areas of expertise, pedagogical approaches, or organizational division<br />
of labor. Some saw the change as a plan worthy of resistance<br />
or subversion, as the Louisville administrators foresaw (1g).<br />
Even so, these members of the staff responded positively to three<br />
kinds of suggestive challenge in the military setting:<br />
First--Look at the kind of testing you are doing now. If your<br />
life depended upon the ability of one of your students to perform in<br />
your area of expertise, would you say he was qualified on the basis<br />
of the test you give now?<br />
Second--If you cannot show how your subject applies to the graduate’s<br />
job, why not delete your subject from that course?<br />
Third--If application of the facts you teach is taught by someone<br />
else, why not stop trying to test application now? Why not get<br />
together with the other instructor and test it later?<br />
(SLIDE OFF)<br />
Popham’s reason 4 (6c) is shown on this slide.<br />
(SLIDE 11 ON)<br />
SLIDE 11 REASON 4: “MEASURABILITY IMPLIES BEHAVIOR. . .”<br />
(SLIDE OFF)<br />
Comparison of the three schools is shown on this slide.<br />
(SLIDE 12 ON)<br />
SLIDE 12 REASON 4: “DEHUMANIZING”<br />
The ISU experience (3b) makes the case for criterion testing of overt<br />
behavior at the teacher college level.<br />
The Louisville experience (1h) makes a case for contingency planning<br />
under a whole person interaction concept.<br />
As previous remarks on Infantry School experience with Reasons 1<br />
through 3 imply, we did not run head-on into cries of reductionism,<br />
robotism, or similar charges of dehumanizing either instructors or<br />
students. In fact, part of our underlying appeal for performance-oriented<br />
testing has been on grounds of human interdependence. We<br />
asked for and received imaginative replies to this generalized question:<br />
“What is the minimum behavior you will accept now from the<br />
student that will convince you that he can perform in your area of<br />
expertise? ” The replies were translated into criterion statements<br />
of desired behavior and hence into performance tests.<br />
In the so-called “soft skills” areas--the leadership and attitude areas<br />
addressed by the present author in 1972 (9)--subject area specialists<br />
working to the point of exhaustion with test psychologists progressed<br />
from a condition of no validating tests, through multiple choice knowledge<br />
regurgitation tests, to case study type situational judgment tests<br />
of application of principles and knowledge. In addition, they have<br />
developed leadership peer rating scales on the premise that leadership<br />
qualities depend, at least in some degree, upon the perceptions of<br />
those who are to be led. They have also developed a judgmental<br />
rating chart for race relations instructors to use in judging fitness<br />
of students to graduate as group facilitators. Significantly, here,<br />
according to Rogers (10), one of the appropriate behaviors is keeping<br />
quiet, remaining still, and allowing contentious group members to<br />
relate to the group.<br />
Experiences at the three schools seem to point to the desirability of<br />
stating performance objectives in advance and testing them by some<br />
means other than the objective-type, norm-referenced paper and<br />
pencil test.<br />
(SLIDE OFF)<br />
Popham’s reason 5 (6d) is shown on this slide.<br />
(SLIDE 13 ON)<br />
SLIDE 13 REASON 5: “IT IS SOMEHOW UNDEMOCRATIC. . .”<br />
(SLIDE OFF)<br />
Comparison of the three schools is on this slide.<br />
(SLIDE 14 ON)<br />
SLIDE 14 REASON 5: “UNDEMOCRATIC”<br />
This charge does not appear to be a problem at any of the three<br />
schools.<br />
Our students are encouraged to demand to know in advance the objective<br />
of an impending lesson or exercise. Most of the students are in<br />
courses of instruction by choice, planning their futures around their<br />
to-be-acquired ability to perform in a given military job field upon<br />
graduation. They tell us, sometimes vociferously, when an examination<br />
fails to meet their expectations to allow them to demonstrate their<br />
ability to perform future job tasks. True, there are some who complain<br />
about having to make decisions on examinations, but not many.<br />
(SLIDE OFF)<br />
Popham’s reason 6 (6d) is shown on this slide.<br />
(SLIDE 15 ON)<br />
SLIDE 15 REASON 6: "THAT ISN'T REALLY THE WAY TEACHING<br />
IS. . . "<br />
(SLIDE OFF)<br />
The comparison is on this slide.<br />
(SLIDE 16 ON)<br />
SLIDE 16 REASON 6: "REALISM"<br />
Discussion of our experience follows.
Relatively few, including those of the United States <strong>Military</strong> Academy,<br />
have been trained and educated to perform a specific job. Now,<br />
suddenly, these graduates, who are now faculty members, are re-<br />
quired to write and teach toward behavioral objectives and to devise<br />
tests requiring appropriate student behavior at a given criterion level<br />
under controlled conditions.<br />
They were not taught like that.<br />
Teaching was not like that.<br />
Teaching is now like that.<br />
And, as we are seeing, the change from one frame of reference to<br />
the other spawns collisions.<br />
We have managed the overall change through successive approxima-<br />
tion. Quality controllers at first urged, helped, and required instructors<br />
to use action verbs in their teaching or training objectives, and later<br />
required more and more specificity of the conditions and standards<br />
of performance to be elicited from the student.<br />
Quality controllers are continuing to help instructors develop tests<br />
of identified job tasks which are as closely related as possible to<br />
job actions, conditions and standards. While some of the tests<br />
remain in the multiple choice mode, in which the student selects one<br />
of the stated actions as his solution, we are pulling away from these<br />
as we develop the capability, resources, strategies, and instruments<br />
to observe and grade outdoor and indoor situational performance.<br />
(SLIDE OFF)<br />
Popham's reason 7 (6e) is on this slide.<br />
(SLIDE 17 ON)<br />
SLIDE 17 REASON 7: "IN CERTAIN SUBJECT AREAS. . . "<br />
(SLIDE OFF)<br />
The comparison is shown on this slide.<br />
(SLIDE 18 ON)<br />
SLIDE 18 REASON 7: "HUMANITIES"<br />
Obviously, all three schools encountered the problem (1d, 3b).<br />
In our early experience, an instructor would feel that systems engineer-<br />
ing and performance testing were fine for everyone else except him,<br />
because of the nature of his subject area. A few went to great lengths<br />
to "prove" that performance testing in their areas was impossible. Their<br />
cases were strengthened, spuriously, by early inability of test<br />
psychologists who were unfamiliar with specific doctrine and subject<br />
areas to come up with instant test items, schemes or scenarios.<br />
We managed this by continuing to iterate performance testing as the<br />
goal under a stance we termed "urgent evolution, not instant revolu-<br />
tion"; by frequent, open, and sometimes protracted test item develop-<br />
ment sessions between instructors and test specialists. These took<br />
on a Socratic aura in that the testers continually asked questions of a<br />
“could you do this” type until the instructor himself eventually con-<br />
structed the test. We, in the words of Doll and his associates (1i):<br />
“held firmly to the notions that people support what<br />
they help create, that people affected by change must be<br />
allowed active participation and sense of ownership in the<br />
planning and the conduct of the change. ”<br />
(SLIDE OFF)<br />
We believe that the approach was successful because both the test<br />
psychologist and the instructor gave freely of themselves in order<br />
that the instructor might succeed. If this sounds humanistic, it is<br />
so intended, for it fits rather well with Buehler and Allen's (11) dis-<br />
cussion of Bugental's humanistic ethic:<br />
"Integral to his scheme is his emphasis that the humanistic<br />
educator-person hopes that via his interventions and inter-<br />
actions he himself and the individual with whom he is inter-<br />
acting will emerge from that experience as societal change<br />
agents themselves. ”<br />
This Socratic tedium is still going on, but, as Bugental would have it,<br />
the conferences are more and more frequently now between instructor<br />
and instructor in the questioner and responder roles.<br />
At this point I should say that some of our best constructed paper-and-<br />
pencil tests are the military tactics planning and movement exercises<br />
laid on terrain maps. However, these tests have no precise solutions.<br />
They are graded subjectively and serially by a team of 3 tacticians<br />
who use their own judgments as to how good a tactical move the student<br />
made and how good the student's justification of the move is. Failure<br />
rates are traditionally high even though norm-referenced grading has<br />
not been completely eliminated. Quantification of tactical judgments<br />
in order to determine why this is so may indeed prove a difficult task.<br />
Popham's reason 8 (6e) is shown on this slide.<br />
(SLIDE 19 ON)<br />
SLIDE 19 REASON 8: "WHILE LOOSE GENERAL STATEMENTS OF<br />
OBJECTIVES. . . "<br />
(SLIDE OFF)<br />
The comparison is on this slide.<br />
(SLIDE 20 ON)<br />
SLIDE 20 REASON 8: "INNOCUOUS OBJECTIVES"<br />
As skills in identifying and writing behavioral objectives are being<br />
developed, it is simply axiomatic that poor, weak, or innocuous objec-<br />
tives will be included (1d, 3b). The prospect is allowed for in the all-<br />
pervasive quality control provision of the systems model used at all<br />
three schools (1, 3). Infantry School instructors, and probably others<br />
as well, found that trying to construct performance tests on innocuous<br />
or poorly constructed behavioral objectives was essentially impossible.<br />
This impasse also held true when constructing tests on objectives which<br />
did not relate to the job for which the student was being trained.<br />
During conferences with an instructor who was having great difficulty<br />
applying his subject area content to the course graduate’s job, the<br />
test psychologist would help the instructor define the problem but<br />
would never point the finger or try to dictate course or lesson content.<br />
As a result, instructors sometimes drastically revised objectives<br />
and instruction in order to teach and test critical job skills.<br />
(SLIDE OFF)<br />
Popham’s reason 9 (6f) is on this slide.<br />
(SLIDE 21 ON)<br />
SLIDE 21 REASON 9: "MEASURABILITY IMPLIES ACCOUNTABILITY. . . "<br />
Illinois State (3a) reported a fragmented accountability framework<br />
which is also similar to ours as illustrated on previous slides<br />
(Slides 2, 3).<br />
Our faculty members do not fear accountability. In fact, most demand<br />
it. Some argued tenaciously for the right to give tests of knowledge<br />
and comprehension, saying, "I want to know that the student knows<br />
what I taught him. If he fails the course later, it will not be because<br />
I didn't teach him my subject. "<br />
This stance was understandable. We managed it by saying in effect,<br />
"You may use whatever instructional strategy you desire, including<br />
tests of factual knowledge. But when the student is certified in your<br />
area of expertise, he must be certified on a performance test. If<br />
you teach only facts, and someone else teaches application of the facts,<br />
the test must be on the application. Results of your in-class informa-<br />
tion or practice tests can be given to the student's faculty advisor for<br />
consideration if the student's progress appears in doubt. "<br />
We found one area which might have been fear of accountability but was<br />
more likely a liberal arts compassion for the students. The instructor<br />
proposed an examination of several problems wherein the student could<br />
omit one or more. This plan was abandoned when test specialists<br />
pointed out that it did not assure testing of all the training objectives<br />
in the instructor’s area of expertise.<br />
(SLIDE OFF)<br />
Popham's reason 10 (6g) is on this slide.<br />
(SLIDE 23 ON)<br />
SLIDE 23 REASON 10: "IT IS FAR MORE DIFFICULT. . . "<br />
(SLIDE OFF)<br />
The comparison is on this slide.<br />
(SLIDE 24 ON)<br />
SLIDE 24 REASON 10: "GENERATE OBJECTIVES"<br />
Whereas the ISU self-instruction packages (3b) are said to contain all<br />
the competencies considered necessary for teacher training, we noted<br />
previously that not all the higher order competencies were specified.<br />
At Louisville (1h), workshops in systems language and in writing<br />
behavioral objectives were run initially but were not continued after<br />
the first year. Instead, the school entered a plateau phase--a time<br />
during which concepts, ideas, and motivations hopefully are being<br />
internalized.<br />
In our experience, some instructors raised the question as to who<br />
properly should write the training objectives. Outside agencies?<br />
Systems engineers? Instructors? Some instructors believed very<br />
strongly that their job was to teach, and nothing else. Some felt that,<br />
because of their junior officer grades, they were singularly unquali-<br />
fied to construct and conduct tests.<br />
We managed to gain their cooperation in helping to write objectives<br />
by stressing:<br />
1. Need to certify graduates;<br />
2. Graduate capability to perform in the instructor's area of<br />
expertise on the job;<br />
3. The logically inseparable relation of teaching and testing;<br />
4. Teaching and verifying or testing as two aspects of the same<br />
behavioral modification process;<br />
5. Qualification to teach carries with it the obligation to test;<br />
6. Writing the action, conditions and standard of the training<br />
objective as tantamount to writing the test.<br />
Here again, the approach was by successive approximations to the<br />
ideal product.<br />
(SLIDE OFF)<br />
Popham's reason 11 (6g) is on this slide.<br />
(SLIDE 25 ON)<br />
SLIDE 25 REASON 11: "IN EVALUATING THE WORTH OF INSTRUC-<br />
TIONAL SCHEMES. . . "<br />
(SLIDE OFF)<br />
The comparison is on this slide.<br />
(SLIDE 26 ON)<br />
SLIDE 26 REASON 11: "UNFORESEEN EVENTS"<br />
Unforeseen events punched all three schools. Mentioned earlier were<br />
Louisville's (1h) underemphasis on objectives for classroom manage-<br />
ment and Illinois State's (3c) faculty and students' slow, agonizing<br />
adjustment to competency-based teacher education.<br />
In our experience, the role of the little 10-minute quiz became the<br />
focus for philosophical clash. On the one hand were those who wished<br />
to verify learning of facts without application of those facts to the<br />
learner's future job. There were also those who viewed education<br />
as punishment and the quizzes as a whip, an outside motivator or<br />
disciplinary measure necessary to assure that students read their<br />
homework assignments. These proponents would penalize students,<br />
academically, for poor answers to pop quiz questions.<br />
The problem was overcome by extensive revision of school testing<br />
policy and by separating these brief tests into two categories:<br />
1. Instructional techniques, progress checks or feedback devices;<br />
2. Verifying techniques, or performance tests of ability to<br />
perform job tasks.<br />
(SLIDE OFF)<br />
Hopefully this paper has answered, at least in part, the following ques-<br />
tions about criterion-referenced measurement posed by Dziuban and<br />
Vickery (12) in February 1973:<br />
"How are teachers to make the transition from the more<br />
traditional practices and what are the consequences?”<br />
“Can present instructional material be adapted to<br />
criterion-referenced measurement?"<br />
“Will a new system ultimately result in substantial<br />
additional demands upon teachers, many of whom<br />
are presently operating on overcrowded schedules?”<br />
CONCLUSIONS<br />
I believe that we can draw the following general conclusion from the<br />
experience at the various schools as presented and discussed here:<br />
Experiences with radical change from one frame of reference to<br />
another have been about the same at USAIS as they have been at other<br />
schools that we know about. In the words of Doll and associates at<br />
Louisville (1b), this conclusion is stated as follows:<br />
". . . problems which have arisen in bringing about<br />
educational reforms . . . exemplify and reinforce findings<br />
of previous research. . . "<br />
CLOSE<br />
This paper is not intended to malign anybody. Instead, and again I use<br />
the words of Doll and his associates (1j):<br />
"It . . . is intended as a salute to<br />
people who tried boldly to<br />
make a significant impact on problems which bedevil every<br />
large . . . school system. Their mistakes stand out clearly<br />
because they attempted more than anyone else to date.<br />
Their successes may seem unduly minor because they<br />
attempted to conquer a whole mountain range and now<br />
only control some of the peaks. But they have at least<br />
gotten past the foothills, and from the peaks they can<br />
see more clearly than before. They can continue to<br />
climb. Hopefully, they can help those of us at base<br />
camp who may try to join them. ”<br />
REFERENCES<br />
1. Doll, Russell C.; Love, Barbara J.; and Levine, Daniel U.<br />
"Systems Renewal in a Big-City School District: The Lessons<br />
of Louisville. " Phi Delta Kappan, April, 1973, pp 524-534.<br />
1a. Ibid, p 527.<br />
1b. Ibid, p 530.<br />
1c. Ibid, pp 526, 7.<br />
1d. Ibid, pp 527, 8.<br />
1e. Ibid, pp 525, 7.<br />
1f. Ibid, p 528.<br />
1g. Ibid, p 525.<br />
1h. Ibid, pp 526-8.<br />
1i. Ibid, p 529.<br />
1j. Ibid, p 524.<br />
2. Kramer, Bryce R., and Kneisel, Richard S. "Systems Approach<br />
to Evaluation and Quality Control of Training. " US Army Infantry<br />
School, Fort Benning, GA 31905. Presented to Military Testing<br />
Association Conference, Washington, D. C., September, 1971.<br />
2a. Ibid, p 6.<br />
3. Getz, Howard; Kennedy, Larry; Pierce, Walter; Edwards, Cliff;<br />
and Chesebro, Pat. "From Traditional to Competency-Based<br />
Teacher Education. " Phi Delta Kappan, January, 1973, pp 300-302.<br />
3a. Ibid, p 302.<br />
3b. Ibid, p 301.<br />
3c. Ibid, pp 301, 2.<br />
4. Gross, Neal; Giacquinta, Joseph B.; and Bernstein, Marilyn.<br />
"An Attempt to Implement a Major Educational Innovation: A<br />
Sociological Inquiry. " Cambridge, Harvard University, Center<br />
for Research & Development on Educational Differences, 1968,<br />
Chap 6.<br />
5. Gerber, J. E. Jr. "Proactive and Retroactive Effects in Programmed<br />
Learning. " In Ofiesh, G. D., and Meierhenry, W. C.<br />
(ED), Trends in Programmed Instruction. Washington, NEA,<br />
1964, pp 232-234.<br />
6. Popham, W. James. "Probing the Validity of Arguments Against<br />
Behavioral Goals. " In Kibler, R. J., Barker, L. L., and Miles,<br />
D. T., Behavioral Objectives and Instruction. Boston, Allyn &<br />
Bacon, 1970, pp 115-124.<br />
6a. Ibid, p 116.<br />
6b. Ibid, p 117.<br />
6c. Ibid, p 118.<br />
6d. Ibid, p 119.<br />
6e. Ibid, p 120.<br />
6f. Ibid, p 121.<br />
6g. Ibid, p 122.<br />
7. Bloom, Benjamin S. Taxonomy of Educational Objectives, Handbook<br />
I: Cognitive Domain. New York, McKay, 1956.<br />
8. Towne, William B. Sr. "Differentiating Between Situation-Oriented<br />
Items and Setting-Oriented Items in MOS Proficiency Testing. "<br />
5 Seymour Pl., North Augusta, S. C. 29811 (Unpublished manuscript,<br />
personal communication), October, 1972, p 3.<br />
9. Gerber, J. E. Jr. "Evaluation of Leadership and Communicative<br />
Skills. " In Haines, R. E. Jr., and Hunt, I. A. Jr. (ED), CONARC<br />
Soft Skills Training Conference. Fort Bliss, TX, US Continental<br />
Army Command, December, 1972, Vol IV, pp 28-34.<br />
10. Rogers, Carl R. Carl Rogers on Encounter Groups. New York,<br />
Harper & Row, 1970, pp 48, 66 et seq.<br />
11. Buehler, Charlotte, and Allen, Melanie. Introduction to Humanistic<br />
Psychology. Monterey, CA, Brooks/Cole, 1972, p 78.<br />
12. Dziuban, Charles D., and Vickery, Kenneth V. "Criterion-<br />
Referenced Measurement: Some Recent Developments. "<br />
Educational Leadership, February, 1973, pp 483-486.<br />
SYSTEMS ENGINEERING PROCESS<br />
[Flow chart; legible block: CONDUCT OF TRAINING.]<br />
THE FOUR AREAS OF RESPONSIBILITY FOR EACH<br />
PROFESSIONAL SEQUENCE STAFF MEMBER<br />
[Four-column chart: A. Job within the system; B. Individual team<br />
participation; C. Sequence guide revision; D. Committee membership.<br />
Legible tasks include: maintain library materials; attend team<br />
meetings; correct papers; offer feedback to students; supervise<br />
micro-teaching; build new tests or expand old tests; build new media<br />
or revise old media; revise old packages; serve as team chairman on<br />
a rotating basis; vote on acceptance of new packages and tests in<br />
areas of expertise. Footnote: * Applies also to USAIS instructor<br />
program.]<br />
HIERARCHY OF MENTAL REQUIREMENTS<br />
FOR PERFORMANCE OF JOB TASKS<br />
[Pyramid diagram; legible levels include COMPREHENSION and<br />
EVALUATION (HIGH).]<br />
SLIDE 7<br />
REASON 1: "UNEXPECTED OPPORTUNITIES"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- EXPERIENCED: IN PASSING. STUDENTS DEMAND PRECISE GOAL<br />
STATEMENTS. ADULT STUDENTS DISCIPLINE THEMSELVES AND INSTRUCTORS<br />
WHO WITHDRAW FROM GOALS. MANAGEMENT: REQUIRE CRITERION TESTS FOR<br />
JOB PERFORMANCE.<br />
ISU -- EXPERIENCED: YES. COURSE WRITERS/INSTRUCTORS RESTRICTED BY<br />
PRECISELY STATED OBJECTIVES. MANAGEMENT: CONTINUE TO REQUIRE<br />
PRECISE WRITING.<br />
LOUISVILLE -- EXPERIENCED: IN REVERSE. OBJECTIVES CENTERED ON GROUP<br />
PROCESS AND STUDENT-CENTERED LEARNING NEGLECTED STRUCTURE AND ADULT<br />
GUIDANCE. MANAGEMENT: FACULTY SELF-MANAGEMENT.<br />
REASON 3: "ATTITUDE CHANGE"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- EXPERIENCED: YES. CERTAIN PROFESSIONAL STAFFERS FAVORED<br />
STATUS QUO. MANAGEMENT: CHALLENGE PROFESSIONALISM.<br />
ISU -- EXPERIENCED: YES. VARIOUS LEVELS OF PROFESSIONAL STAFF<br />
ENTHUSIASM TO CHANGE IN TRADITIONAL TEACHER ROLE. MANAGEMENT:<br />
CHALLENGE PROFESSIONALISM; ACTIVE INSERVICE PROGRAM FOR GRADUAL<br />
CHANGEOVER.<br />
LOUISVILLE -- EXPERIENCED: YES. (1) SOME TEACHERS AND PRINCIPALS<br />
OPPOSED CHANGE TO "SYSTEMS" CONCEPT; (2) TEACHER/PARAPROFESSIONAL<br />
ROLE CONFLICTS. MANAGEMENT: TRANSFER OUT WITHOUT PREJUDICE.<br />
SLIDE 11<br />
REASON 4: "DEHUMANIZING"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- EXPERIENCED: YES. SCHOOL COURSES AND JOBS STRESS HUMAN<br />
INTERDEPENDENCE; CONTRIVED TESTS RESTRICT MEASURABLE STUDENT<br />
BEHAVIOR. MANAGEMENT: DEVELOP TESTS OF APPLICATION OF JUDGMENT IN<br />
JOB SITUATIONS; REDUCE RELIANCE ON MULTIPLE CHOICE TESTS.<br />
ISU -- COMPETENCIES INFERRED BUT NOT MEASURED BY PAPER AND PENCIL<br />
TESTS; INCOMPETENT STUDENTS LET BY ON NORM-REFERENCED TESTS.<br />
MANAGEMENT: TEST OVERT BEHAVIOR REQUIRING SKILL IN PLANNING FOR,<br />
ANALYZING, INTERPRETING, AND EVALUATING SITUATIONS OR BEHAVIORS;<br />
ELIMINATE CONTRIVED MULTIPLE CHOICE TESTS; DEVELOP TRANSITIONAL<br />
NORM-TO-CRITERION TEST STRATEGIES.<br />
LOUISVILLE -- EXPERIENCED: IN REVERSE. WORKSHOPS ON NEW "HUMANIZED<br />
BEHAVIORAL SYSTEM" LEFT TEACHERS UNPREPARED FOR PUPIL BEHAVIORAL<br />
PROBLEMS. MANAGEMENT: SELF-MANAGEMENT.<br />
SLIDE 13<br />
REASON 5: "UNDEMOCRATIC"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- STUDENTS DEMAND TO KNOW IN ADVANCE THE INTERMEDIATE AND<br />
TERMINAL BEHAVIORS EXPECTED. STUDENTS DEMAND OPPORTUNITY TO<br />
DEMONSTRATE PROFICIENCY. MANAGEMENT: DEVELOP TESTS IN A PERFORMANCE<br />
MODE; OPERATE QUALITY CONTROL MODEL (SLIDE 1).<br />
ISU -- NOT A STUDENT COMPLAINT ABOUT SELF-STUDY PROGRAMS.<br />
LOUISVILLE -- PARENTS, TEACHERS, PARAPROFESSIONALS, AND PUPILS ARE<br />
GOVERNING BOARD. MANAGEMENT: AGREEMENT IN ADVANCE; PARENTS AND<br />
PUPILS ARE FINAL AUTHORITY ON NEED FOR PROJECT MODIFICATION.<br />
SLIDE 15
REASON 6: "REALISM"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- EXPERIENCED: YES. INSTRUCTORS EMULATE THEIR COLLEGE<br />
PROFESSORS IN THE KNOWLEDGE ACQUISITION WORLD. INSTRUCTORS RELUCTANT<br />
TO ENTER THE WORLD OF JOB PERFORMANCE TESTING. MANAGEMENT:<br />
SUCCESSIVE APPROXIMATIONS; SUCCESSIVELY MORE PRECISE STATEMENTS OF<br />
JOB TEST ACTIONS, CONDITIONS AND STANDARDS OF PERFORMANCE.<br />
ISU -- EXPERIENCED: NO. USE OF SELF-PACED INSTRUCTIONAL PACKAGES IN<br />
TEACHER EDUCATION CONSTITUTES THE REAL WORLD FOR THE STUDENT IN<br />
TEACHER PREPARATION. MANAGEMENT: GRADUATES EMULATE COLLEGE<br />
PROFESSORS, BECOME DEVELOPERS OF SELF-PACED PROFESSIONAL SEQUENCE<br />
GUIDES.<br />
LOUISVILLE -- MANAGEMENT: TRANSFER WITHOUT PREJUDICE.<br />
REASON 8:<br />
"WHILE LOOSE GENERAL STATEMENTS OF OBJECTIVES MAY<br />
APPEAR WORTHWHILE TO AN OUTSIDER, IF MOST. . . "<br />
REASON 8: "INNOCUOUS OBJECTIVES"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- GOALS OF SUBJECT MATTER EXPERTS WERE EXTRANEOUS TO NEEDS OF<br />
COURSE GRADUATES. MANAGEMENT: TIGHTEN SYSTEMS ENGINEERING; DEVELOP<br />
JOB-RELATED TESTS.<br />
ISU -- EXPERIENCED: PERHAPS. HIGHER ORDER COMPETENCIES NOT SPECIFIED.<br />
MANAGEMENT: ANALYZE AND UPGRADE.<br />
LOUISVILLE -- SOME BEHAVIORAL OBJECTIVES "DID LITTLE BUT PARROT<br />
POINTS OF VIEW STRESSED BY ADMINISTRATORS OR TRAINERS". MANAGEMENT:<br />
FACULTY SELF-DEVELOPMENT.<br />
SLIDE 21
REASON 9: "ACCOUNTABILITY"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- EXPERIENCED: IN REVERSE. INSTRUCTORS WANTED TO TEST STUDENT<br />
KNOWLEDGE LEVELS FREQUENTLY TO AVOID BLAME FOR FAILURES. MANAGEMENT:<br />
USE KNOWLEDGE TEST AS INSTRUCTIONAL STRATEGY; USE PERFORMANCE TEST<br />
AS GRADUATION REQUIREMENT.<br />
ISU -- ACCOUNTABILITY FRAGMENTED. SELF-PACED PACKAGES ARE GROUP<br />
PRODUCTS. MANAGEMENT: CONTINUE AS PLANNED.<br />
LOUISVILLE -- EXPERIENCED: NOT A PROBLEM. TEACHER WAS "FACILITATOR".<br />
MANAGEMENT: IN PARENT-TEACHER-PUPIL "MINIBOARD" GOVERNANCE, PARENTS<br />
AND PUPILS ARE FINAL AUTHORITY ON NEED FOR CHANGE.<br />
SLIDE 23
REASON 10: "GENERATE OBJECTIVES"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- EXPERIENCED: YES. INSTRUCTORS DESIRED TO TEACH, LEAVING<br />
DOCUMENTATION AND TESTING TO OTHERS. MANAGEMENT: SUCCESSIVE<br />
APPROXIMATIONS; TEACHING AND TESTING ARE TWO ASPECTS OF THE SAME<br />
THING.<br />
ISU -- EXPERIENCED: NOT REPORTED. MANAGEMENT: ALL NECESSARY<br />
COMPETENCIES INCLUDED IN INSTRUCTIONAL OBJECTIVE PACKAGES.<br />
LOUISVILLE -- MEMORIZATION AND RECITATION OBJECTIVES WERE MORE<br />
BEHAVIORALLY ORIENTED THAN WERE "HUMANISTIC" OBJECTIVES. MANAGEMENT:<br />
WORKSHOPS IN BEHAVIORAL OBJECTIVES; PLATEAU PHASE: RELAX STRESS ON<br />
BEHAVIORAL OBJECTIVES; INSTRUCTOR SELF-MATURATION.<br />
"IN EVALUATING THE WORTH OF INSTRUCTIONAL SCHEMES<br />
IT IS OFTEN THE UNANTICIPATED RESULTS WHICH ARE<br />
REALLY IMPORTANT, BUT PRESPECIFIED GOALS<br />
MAY MAKE THE EVALUATOR INATTENTIVE TO THE<br />
UNFORESEEN. "<br />
REASON 11: "UNFORESEEN EVENTS"<br />
EXPERIENCE AND MANAGEMENT<br />
USAIS -- EXPERIENCED: YES. PHILOSOPHICAL CLASH OVER ROLE OF<br />
10-MINUTE QUIZ. MANAGEMENT: SEPARATE QUIZZES INTO TESTS OF CONTENT<br />
AND TESTS OF JOB TASK ACCOMPLISHMENT.<br />
ISU -- EXPERIENCED: YES. STUDENTS LACK SELF-DISCIPLINE TO COMPLETE<br />
SELF-PACED PROGRAM. STUDENTS MUST ADJUST TO UNRELENTING PRESSURE<br />
FROM PROGRAM START TO FINISH. SLOW, AGONIZING ADJUSTMENT BY STAFF<br />
AND STUDENTS. MANAGEMENT: HELP STUDENTS PLAN WORK SCHEDULE.<br />
LOUISVILLE -- EXPERIENCED: IMBALANCE IN SPECIFYING TEACHER AND<br />
LEARNER BEHAVIORS. MANAGEMENT: TEACHER SELF-MANAGEMENT.<br />
CRITERION REFERENCED PERFORMANCE TESTING<br />
IN COMBAT ARMS SKILLS<br />
John F. Hayes<br />
URS/Matrix Company<br />
My presentation today represents a progress report on the work that<br />
URS/Matrix Company personnel have been doing in the area of performance<br />
proficiency testing. While we have been working in this area for over<br />
four years, it is specifically our current work for the Army in the<br />
measurement of combat arms skill proficiency that I want to focus on today.<br />
This work is being done under contract with the Army Research Institute.<br />
Dr. Frank Harris, Dr. Robert Root, and Major Larry Cord are the ARI<br />
personnel monitoring this work. Other project personnel from URS/Matrix<br />
include Mr. Ray Griffin, Dr. Boyd Mathers, and Mr. Don Jones.<br />
As background to this description, I would like to present a general<br />
framework of performance testing that has evolved from our work and the<br />
work of others in this area. Performance testing itself, of course, is not<br />
a new concept. It has long been recognized as one of the more important<br />
techniques in the area of training assessment. It has, however, traditionally<br />
presented problems in terms of the administrative load that it imposes on<br />
the testing function and in terms of test reliability. It takes a great<br />
deal of time, support materials, and it requires numerous personnel to<br />
execute. Still, it represents the most valid means of assessing proficiency.<br />
This is particularly true with respect to training for specific job<br />
assignments, which is a major concern of the military services.<br />
Most general training development models call for the development of criterion tests early in the training development process. Criterion tests developed from such models are based not upon the training content being developed, but upon the same job requirement information that was generated for the training material development itself. The test development effort, then, is a parallel and concurrent but independent effort. In this way the criterion tests measure whether the student can perform criterion job functions upon completion of training, not just whether or not he learned what was in the course.
"Criterion referenced" tests, as we are employing the term, mean more than scores measured against fixed as opposed to relative standards. By criterion referenced we also mean that skills are measured in the context of the application situation as completely as possible. By criterion referenced we mean that the tests are designed so that performance on them represents a true measure of job performance, rather than measures of the possession of skills presumed to contribute positively to job performance. In the area of Infantry and Armor combat skills, there are no acceptable job performance conditions against which to verify or validate evaluation measures. In other areas in which we have worked, electronic and mechanical repair maintenance, work samples could be easily constructed.
For the Infantry and Armor soldier this was not the case. The job of the combat arms soldier is not to apply a specific set of skills on a repetitive basis. It is to recall and select required skills as needed by unique situations and integrate and apply them under conditions of stress ranging from mild to drastically severe. What we were seeking, then, was to approach and approximate those conditions in our evaluation of his job proficiency.
In our development of criterion referenced performance tests, several guiding principles have become common to all of our developmental efforts.
The first is that we start by defining actual job tasks. These tasks come from the analysis of the performance of job incumbents. This information can be gathered in a variety of ways. It can be done through conventional surveys of job requirements and job duties, or by observation of job incumbents in the performance of their duties, or it can be done through the "panel of experts" approach. The method used has to be selected based on the best source of information available. Such methods are not a central topic for today's discussion; however, a fundamental premise of our test development approach is that through the use of such data, validity is built into the test situation from the beginning. The validity of the performance test results, then, is not demonstrated by comparison with other indices such as supervisor ratings or other test score results that are more typically used for test validation. Content validity is established as a precondition, so that the final tests represent absolute criteria against which any other indices of job performance can be themselves validated.
The second characteristic of our performance tests is that meaningful-size tasks are selected, which require skill integration and selection from a total repertoire of application skills. In performing a job duty, it is important that the job incumbent be able to handle all of the parameters of that job duty in the context and sequence that he will be expected to perform on the job. Under our concept, performance testing is not just the isolated performance of defined skills, but would include the selection of skills to be applied at any given time based upon the parameters of the situation. For example, a technician may be perfectly capable of repairing a carburetor, and this could be tested specifically. It is also important, in our view, that he be able to recognize when the carburetor needs to be repaired. That is, he must be able to isolate the carburetor as a malfunctioning component from among the myriad of other engine components. Further, the performance of this task should be done in the context of the operational environment. To the extent possible the individual should have to locate the problem, select tools and equipment, use appropriate job documentation effectively, correct the problem, and, in general, demonstrate his ability to operate within the operational environment. This, then, is what we mean by total skill integration.
The third major characteristic of the performance oriented tests is that the measures that are used for gauging successful performance are product-oriented rather than process-oriented. When a man is functioning as part of a system he has inputs and outputs the same as any other element of that system, either man or machine. The functioning of the system depends upon the quality and timeliness of the man's outputs. These characteristics can be measured as can the outputs of any other component of the system.

Some of the more compelling aspects of performance for measurement purposes are often processes, and in many instances the quality of such processes is a matter of subjective judgement on the part of a rater or evaluator. He will observe whether the performing individual did the task in accordance with prescribed procedures without questioning whether those procedures were in fact the only way of accomplishing the output.
In the design of our performance tests, we have conscientiously selected criterion measures which do not rely upon the subjective judgement of the rater. The guiding philosophy of this test development effort was that we do not care how a man accomplishes his job or his task. What we are concerned with is whether he got the job done, i.e., produced the required outputs at the proper time. In this way, the testing procedures can remain independent of the specific processes that are prescribed.
Applying these principles to testing combat skills of the Light Weapons Infantryman (MOS 11B), we began with a definition of the situations in which the individual is expected to perform. These can generally be defined in terms such as: assault and defense, reconnaissance patrol, defensive patrol, night infiltration, and anti-tank actions. The next step was to perform a behavioral task analysis, in cue-response terms, to determine exactly what the individual soldier is responding to in each of these situations (the cues) and what outputs (responses) are expected of him.
One of the first characteristics that became evident was that while we were talking in terms of individual proficiency and individual testing, the soldier seldom acts as an individual in the defined criterion situations. He is virtually always a member of a larger unit, most often the fire team, operating as a member of a squad. As we proceeded, it became clear that we had to test him in that context.
Another critical factor that became clear is that the infantryman is operating most of the time against an intelligent opposing force, which reacts to his actions until one side or the other is neutralized. Initially this element was considered too difficult to provide to be considered for inclusion in a test. The more we examined the situation, however, the clearer it became that without this element, attempts to test the combat efficiency of the individual soldier were going to be woefully inadequate. While we investigated hit-indicator systems such as laser beams and radio activated equipment, we felt strongly that, to be effective and usable, any system that we devised had to be free of complicated equipment.
Eventually we devised a system which uses a two-digit number placed on each individual's helmet and rifles equipped with telescopes. When an infantryman can distinguish an opponent's number through his scope and fires a blank round, it is assumed that he would have been able to kill that individual. By placing test controller personnel with each of the opposing elements and equipping the controllers with a radio, individuals can be knocked out of action on each side as they are effectively engaged by the opposing force. The problem is thus allowed to play itself out on a realistic basis. By experimenting with combinations of color, background, and size of the numbers and the power of the scope, a combination of number color, number size, and scope power was achieved which closely duplicates the effective range of the M16 rifle. For practical purposes this range is 200 meters. While the actual range of the M16 is much longer, the combat situation seldom provides accessible targets beyond that distance.

The development of this scoring system opened new possibilities for the use of combat situations for testing purposes. The situations could
now provide realistic cues which are highly similar to the combat situation. They also provide the opportunity for the individual soldier to select and employ, in concert with his fellow squad members, what he considers to be effective responses. The required responses are not pre-defined; he has to select them from his repertoire, under stressful conditions, at the most appropriate time, in order to be successful. Further, the situation will now change as a result of the individual acts taken on both sides. When the individual perceives that he is affecting the outcome, he becomes a very interested and enthusiastic participant in the exercise. When the situation does change as a result of actions one takes, it becomes a measurement criterion. One of the most important elements of the infantryman's life is the fact that he must constantly respond to an environment that is changing under very stressful conditions. And his responses cannot be isolated in terms of his own situations as an individual, but must be coordinated with his fellow squad members, if they are to be effective.

Techniques were then developed for simulating and measuring the effects of the other weapons available to the infantryman. The tests now permit the use of grenades, claymore mines, booby traps, anti-tank weapons, machine guns, and artillery in realistic fashion. As a result, the overwhelming opinion of experienced combat infantry personnel is that we have, in fact, created situations which call for very realistic responses in conditions very closely approximating actual combat.

Having achieved this, it was then necessary to apply the necessary controls to achieve individual testing within this context. The basic data-gathering instruments during these situations are the test controllers
previously mentioned. In addition to being equipped with radios in order to make the situations play realistically and to assess casualties, control personnel have responsibility for recording product outputs for each of the individual participants. The controller keeps track of who scored what kills or was killed with rifle fire, who threw grenades or was killed by grenades, who called in artillery or was killed by artillery fire, who employed booby traps, mines or claymores, and who was killed by these devices. The mechanisms developed for keeping track of this information rely primarily on the use of the identifying numbers on each individual's helmet. Points are accumulated for each individual on the basis of the number of kills that he has scored, plus a factor added for the degree of success achieved by his unit. The unit score is determined on the basis of a set number of points for achieving the objective, less a set number of points for each casualty incurred. The individual scores are then loaded with a factor based upon unit success. At the present time, both the defense and the offense are being scored simultaneously. Therefore, the degree of success of one is diminished by the score of the other.
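As an illustration only, the scoring rules just described (points per kill, a unit score of set objective points less a set penalty per casualty, and individual scores loaded with a unit-success factor) can be sketched as follows. The point constants and the weighting are hypothetical; the paper does not give the actual values used:

```python
def unit_score(achieved_objective, casualties,
               objective_points=100, casualty_penalty=10):
    """Unit score: a set number of points for achieving the objective,
    less a set number of points for each casualty incurred.
    (Constants are illustrative, not taken from the study.)"""
    base = objective_points if achieved_objective else 0
    return base - casualty_penalty * casualties

def individual_score(kills, unit, kill_points=20, unit_weight=0.5):
    """Individual score: points per kill scored, plus a factor
    based on the degree of success achieved by the unit."""
    return kill_points * kills + unit_weight * unit

# A squad that takes its objective with 3 casualties; one member
# scored 2 kills with rifle fire.
u = unit_score(True, 3)      # 100 - 3*10 = 70
s = individual_score(2, u)   # 2*20 + 0.5*70 = 75.0
```

Because both sides are scored simultaneously, the same functions would be applied to the opposing element, so that each side's casualties diminish the other side's unit score.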
There are, of course, many chance factors operating in an individual's score, just as in combat there are many chance factors operating to affect one's mortality. A man can get killed by his own squad members, he can be killed by an ill-launched grenade which happens to bounce off a tree back into his own position, or by calling artillery fire on himself. Some of these are his errors, some are the errors of others. To overcome these chance factors, repetition of testing is employed. Going through a
test situation one time will not necessarily produce an index of the best members of a squad, or absolute individual proficiency scores. Over the course of a series of repetitions of the test, however, it is hypothesized that proficient individuals will achieve higher scores than those who are not proficient. That is, the chance factors will gradually balance out for all participants. Furthermore, it is necessary for the situation to be run numerous times to provide exposure of the individual to different situations. Even running the same problem over the identical terrain produces an altered situation. The fact that no two problems are ever run alike, with exactly the same results or events occurring, is also assumed to be a valid construction in terms of the combat situation. Each combat situation is, in fact, unique, and being able to respond properly to unique situations is the highest measure of a combat infantryman's proficiency. This, then, is what we mean by total skill integration.
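The balancing-out hypothesis can be illustrated with a small simulation: if each run adds zero-mean chance noise (friendly fire, stray grenades, and the like) to an underlying proficiency level, a single run may rank a weaker man above a stronger one, but the means over repeated runs separate. The proficiency values and noise level below are invented for the sketch:

```python
import random

def mean_score(proficiency, runs, rng, noise_sd=30.0):
    """Average observed score over repeated test runs; each run is the
    soldier's underlying proficiency plus zero-mean chance noise."""
    return sum(proficiency + rng.gauss(0.0, noise_sd)
               for _ in range(runs)) / runs

rng = random.Random(42)
strong, weak = 200.0, 120.0

one_run = mean_score(strong, 1, rng)          # a single run is noisy...
fifty_run_strong = mean_score(strong, 50, rng)
fifty_run_weak = mean_score(weak, 50, rng)
# ...but over 50 repetitions the stronger soldier reliably scores higher,
# since the standard error of the mean shrinks with the number of runs.
```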
To provide a better description of the system in operation, I have some slides, taken during our tryout of the Infantry tests at Fort Benning, which depict several of the criterion test situations.
Testing situations were developed for each of the other defined criterion situations using the same guiding principles as for the assault/defense. The individual was to receive the same inputs, perceive the same environment, and make the same responses as would be required by the criterion job. If there was a non-predictable enemy present in the criterion job, he was provided in the test situation. In the reconnaissance patrol, for example, a squad was given an objective to recon, a map, and an
operations order describing what they might encounter in trying to gain intelligence about the objective. The defensive patrol, on the other hand, was given a sector to patrol and protect against recon elements. They could employ any strategy they chose, such as a fixed screen with booby traps, a moving screen, or any combination thereof. The problem was then started at a prearranged time, and it ended when the recon element returned or was neutralized.

Upon the return of the recon patrol all surviving members were tested by having to report what they saw at the objective. The squad was awarded points based upon the percentage of intelligence returned by any members. The objective of a recon patrol is to get accurate and complete intelligence about the enemy back to the base.
The experimental design plan called for the total pool of test items to be administered to a trained Infantry platoon that could be expected to know and be able to perform at a reasonable level of proficiency. The individual test items that correlated best with total test results were then to constitute the final proficiency test. Further, this tryout was to determine the feasibility of the scheduling and times allotted for each of the test items.
As indicated earlier, no independent criteria were considered suitable as validation criteria, since no other criteria were considered to be as valid as the tests themselves. The elements of the test content were reviewed by numerous infantry experts and judged to be valid samples of the infantryman's job. Therefore, agreement or disagreement with indices such as supervisors' ratings or written test scores was not considered appropriate. To determine whether the results did agree with commonly accepted judgements, peer ratings were employed which asked for ratings of preference of individuals to go into combat with.
Test Evaluation

At this point in time we have only had an opportunity to take a cursory look at the results achieved during the tryout of the Infantry tests. Following completion of the Armor test tryout, more complete analyses will be conducted of all the data.

The Infantry tests were administered to one Infantry platoon consisting of 35 people. Nine were Squad and Fire Team Leaders (designated as MOS 11B40) and 26 were squad members (11B20). Formal test administration took one week.
Prior to test administration, pre-training was conducted to insure that personnel were familiarized with the proper employment of the rifle scope, since that was a non-standard item of equipment.

Several weeks were also spent prior to the test administration in training control personnel (NCO's) in the details of test administration and in insuring that the Infantry personnel had sufficient minimum skills to participate in the tests. This was necessary since the platoon we were furnished was newly formed, with personnel just out of individual training who had no unit training or experience. Since the tests were designed for journeyman level infantrymen, basic skills in map reading, artillery adjustments, squad tactics, etc. had to be covered so that these personnel could be tested.
After the week of testing, the platoon members were requested to rank each other in terms of whom they would most prefer to have with them in combat. While they had participated with and against everyone in the platoon, no one had any knowledge of the test score results at the time of this rating. It was based on what the individuals had seen during the month of training and testing.
The raw scores for the 2 level personnel (n=26) (squad members) ranged from 75-227 with a mean of 214.96 and a standard deviation of 48.26. For 4 level personnel (n=9) (Squad and Fire Team Leaders), the range was 158-434 with a mean raw score of 335 and a standard deviation of 96.34.

The correlation of total test score with the peer ratings, computed by Pearson r, was .697 for 4 level personnel and .420 for 2 level personnel. Both are significant at the .05 level of confidence.
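These significance claims can be checked by converting each r to a t statistic with n - 2 degrees of freedom; a quick sketch follows (2.365 and 2.064 are the standard two-tailed .05 critical t values for 7 and 24 degrees of freedom):

```python
import math

def r_to_t(r, n):
    """Convert a Pearson r to a t statistic for testing rho = 0,
    with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

# 4 level personnel: r = .697, n = 9  -> df = 7,  critical t = 2.365
# 2 level personnel: r = .420, n = 26 -> df = 24, critical t = 2.064
t4 = r_to_t(0.697, 9)    # about 2.57, which exceeds 2.365
t2 = r_to_t(0.420, 26)   # about 2.27, which exceeds 2.064
```

Both computed t values exceed their critical values, consistent with the paper's statement that both correlations are significant at the .05 level.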
The Combat Performance sub-portion (two-sided) of the total test correlated with the total test score at .942 and .927, respectively, for the 2 and 4 level personnel.

Due to the built-in content validity of the test situations, the agreement of test results with peer rating data, and the spread of scores obtained, we are satisfied that these situational performance tests are yielding an index of individual combat proficiency for the Light Weapons Infantryman.
Armor Crewman Tests

A similar set of criterion tests was developed for Armor crewmen. The duties of the Armor crew positions are well defined, and they combine with the tank to make a single weapons system. This is both a simplifying and a complicating characteristic. The end product of a tank gun round on target involves the coordinated actions of four people, which makes the job of ferreting out individual contribution and proficiency more difficult. The proficiency of the tank crew as an entity is, and has been, a common level of distinction for proficiency measurement.

To break through to individual proficiency, we went to a process checklist type of evaluation within each situation when it was not possible to get product measures on each individual. The tank criterion situations include tank-to-tank engagement, night bivouac situations, infiltration, a river crossing, overcoming mined obstacles, bunkers, ambushes, and maintenance.
The primary scoring system used in tank engagement is to sight through the main gun via a telescope, as with the infantry tube weapons. This provides immediate verification to the controller as to whether a hit would have been achieved. He can then take the appropriate action to kill the other tank (via radio) or notify the crew that the round missed, and the situation continues. Against enemy infantry, the same scoring system is used as before, that of reporting helmet numbers.

The Armor tests are currently undergoing field evaluation at Fort Carson, Colorado, and no results are available at this time.
Conclusions

Our experience to date in this project has reinforced our belief that performance testing could be employed to measure larger and more meaningful segments of job performance than has generally been the case to date. Certainly the control and data collection problems increase, but the increased validity of the exercise and the resulting increase in confidence in the results appear to be worth the costs. The change in emphasis is from having the individual demonstrate that he knows or can perform each individual skill required, to having him recall and effectively apply such skills as the situation demands. While there may be less control over which specific skills are tested, the sampling can be handled through both repetition and situation design.

These tests are not yet in the final format. Refinement and further research is required to improve the quality of the data collected and to increase the discrimination power of the results. We strongly feel, however,
that these are problems of technique and not the inherent nature of performance tests. The challenge is to pursue the refinement of techniques rather than back off to more manageable though less fruitful approaches to testing. Performance testing has the power of accurate job proficiency assessment, and the challenge is to tap it.

As to the increased costs in time and manpower, we feel that a change in attitude and commitment to the testing process is needed. If we now have the beginning of mechanisms for accurately determining the status of proficiency of our personnel, the resources should be made available to employ them. Without the type of data that is available from well designed performance testing, training efforts will necessarily operate at less than optimum efficiency.
THE AIR FORCE WIFE: HER KNOWLEDGE OF, AND
ATTITUDES TOWARD, THE AIR FORCE*

John A. Belt, Ph.D. &
Arthur B. Sweney, Ph.D.
Center for Human Appraisal
Wichita State University

This paper was prepared for presentation at the Military Testing Association Conference on "Human Resources - Growing Demands," Oct. 28 - Nov. 2, 1973, San Antonio, Texas. A more analytic treatment of the material presented in the paper may be found in technical report #103, "The Air Force Wife - A Study of Morale among Military Dependents," issued by the Center for Human Appraisal, Wichita State University.

*The research reported in this paper was supported by the Air Force Office of Scientific Research, grant # 72-2001.
A major corporation gives gold charms in the shape of each new state to wives of transferred executives, and it becomes status to point with pride at the number of charms acquired during a career span. A novelty? Public relations gimmick? No, rather a small part of the concerted effort in industry to make wives feel a part of their husband's career and satisfied with their place in this future. The industry motives are far from altruistic, for as William Whyte stated in 1948, "as an economic lever ... companies have learned that there is no stimulus quite so effective as the wife if properly handled." Much research has gone into ways to improve the recruiting and retention of civilian employees and maximize their productivity. Helfrich (1965) concluded that "corporations are increasingly interested in the wives of their executives." In a 1971 study of business executives, J.M. and R.F. Pahl typified the comments of their interviewees on the single most important factor influencing their career: "my wife, more than anything else."
It would logically follow, then, that the military in many ways would not only follow this pattern but demand more from wives and family. And likewise, the commitment a man makes in choosing a military career would call for support and approval from his wife and family.

As our society has become more and more mobile, the extended family of the past has been replaced by a more insular unit. No longer does a man have relatives close by or living in his home, for his many moves may take his family far from any familial ties. Without these other sources to rely on, the family tends to turn to each other, and interpersonal relationships attain a higher value in assessing each individual's satisfaction. The demands of a military life emphasize this phenomenon.
One wife in a recent survey of military families related that her eldest child had attended 14 schools before entering college (Entrails, 1971). How she views this style of life, the rewards it offers her and her family, must be weighed against the deficiencies. And the conclusions she reaches concerning these matters will play an important role in her husband's life also. He has drawn his family into a life style unique in many aspects; a subculture of over 4 million dependents. Roger Little (1971) has remarked on this: "All military families have in common knowledge and experience in an occupational culture (or subculture) which is more distinct than that of other occupations in the larger society." Base housing, required mobility, the status in the community of the military man - these are only a few of the pressures a man choosing a military career must weigh. Add to these the similar pressures his wife and family meet, and one can see either an effective cohesive unit or a point of dissension for a man trying to fulfill a role as a husband and father.

In a recent survey of career attitudes among Air Force personnel, it was found that the wife had twice the influence upon her husband's career intentions as any other individual, including his immediate supervisor or any of his peers (Belt, 1972). A determinant as important as this can't be slighted or ignored. In an investigation of junior officer retention problems, Lund (1972) found that wives were the key variable in the decision to separate or remain in the Army. A 1971 study emphasized that any good junior officer retention program must include efforts
expand this to examine how her position as wife and her attitudes as a member of the military ... wives in responding to questions tend ... 361st ...

... men in the wing during the time of the study, over half of whom were married.
Data Collection

A series of three questionnaires were administered to the subjects.

The first questionnaire (SDS-1) was developed as an exploratory instrument to investigate the basic premise that wives of Air Force personnel do, in fact, have strong opinions about their husband's military affiliation. SDS-1 contained questions from major categories: Demography; Relations with Facilities, Benefits, and Services; and General Attitudes toward Military Life. An open ended section for general comments was also included.

As a pilot study, SDS-1 was distributed to the Hospital Squadron to check on its applicability to the Air Force life style. Upon analysis, the instrument was determined to be applicable.
The first questionnaire was administered by distributing the forms to all the married men in the wing, and requesting the husbands to take the instruments to their wives. Return envelopes were provided to facilitate the return of the completed forms. A total of 827 copies of SDS-1 were distributed on April 16, 1971. Of these, 264 completed forms were returned, creating a response rate of 31.9%.
Wives Attitude Survey II (WAS II) was generated for two major purposes. The first section was aimed at investigating in depth trends that were evident in the open ended response section of the first survey. The second aim was to measure more precisely the relations the wives had with the facilities, benefits, and services. Again, the instrument included a demographic section and an open response section.
Ii<br />
A listing of al.1 hoxw addresses iJcre abtaincd throqh the<br />
Cor.solidatcd Ut~,:t Pcxsor:r.cl O:‘i~cc. The scmr.d survey was ~rril~?<br />
directly to the wives cn YarCh 13, 1372. Ikludrd in Khc2 nA1ir.g<br />
were S:anFCd, self ,Iddrcsc& return envelop::.<br />
The sample was divided into groups according to the status of
their husbands serving in the Air Force as follows:
Missile Crew - men that are assigned to operational Combat Missile Crews.
Non Missile Crew - men that are not assigned to Combat Missile Crews.
Career - men with five years or more of time in service.
Officer - men of rank O-1 or above.
Enlisted - men of rank E-9 or below.
First Term Officer - men with four years or less of time in service
with rank O-1 or above.
Career Officer - men with five years or more of time in service with
rank of O-1 or above.
First Term Enlisted - men with four years or less of time in service
with rank E-9 or below.
Career Enlisted - men with five years or more of time in service with
rank E-9 or below.
The dyads selected for analysis to determine intergroup differences
were: Missile Crew-Non Missile Crew; First Term-Career; Officer-Enlisted;
First Term Officer-Career Officer; First Term Enlisted-Career Enlisted;
First Term Officer-First Term Enlisted; Career Officer-Career Enlisted.
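The grouping rules above can be sketched as a small classification routine. This is illustrative only; the field names and rank encoding are assumptions, not the study's actual coding scheme.

```python
def classify(years_in_service, rank, on_missile_crew):
    """Assign a respondent's husband to the study's groups.

    Rules follow the definitions in the text: Career = five or more
    years of service, First Term = four or fewer; Officer = rank O-1
    or above, Enlisted = rank E-9 or below.
    """
    groups = ["Missile Crew" if on_missile_crew else "Non Missile Crew"]
    tenure = "Career" if years_in_service >= 5 else "First Term"
    grade = "Officer" if rank.startswith("O") else "Enlisted"
    groups += [tenure, grade, f"{tenure} {grade}"]
    return groups

# e.g. a first-term lieutenant on a combat missile crew
print(classify(2, "O-1", True))
# ['Missile Crew', 'First Term', 'Officer', 'First Term Officer']
```

Each respondent thus belongs to several overlapping groups at once, which is what makes the pairwise dyad comparisons possible.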
Factor Analysis of Wives' Attitudes
The factor analysis performed on a section of WAS II generated ten
factors. The contributing variables and their loadings for each of the
factors are shown in Appendix 1. The first factor was described as
passive alienation/integration. It connoted a passive role for the wives
of Air Force personnel. They apparently did not feel that their
participation was required or even solicited, yet neither did they feel
that they were rejected or prevented from becoming involved. This
dynamic displayed the disavowal of personal responsibility for integration
into the Air Force lifestyle and an attitude of simply "floating along
with the current."
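The report does not state which extraction method or rotation was used, but the general style of analysis — questionnaire items reduced to a few factors, each described by a column of loadings as in Appendix 1 — can be sketched on synthetic data. All data and dimensions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 simulated respondents answering 6 Likert-type items
# driven by 2 underlying latent attitudes (assumed structure).
latent = rng.normal(size=(200, 2))
weights = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                    [0.1, 0.8], [0.0, 0.9], [0.2, 0.7]])
items = latent @ weights.T + 0.3 * rng.normal(size=(200, 6))

# Principal-component style extraction from the item correlation
# matrix: eigenvectors scaled by sqrt(eigenvalue) give loading
# columns analogous to those tabulated in Appendix 1.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1][:2]            # two largest factors
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

for i in range(loadings.shape[1]):
    print(f"Factor {i + 1} loadings:", np.round(loadings[:, i], 2))
```

Items with large absolute loadings on the same factor are interpreted together, which is exactly how the ten factor descriptions in this section were derived from the tables.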
Factor II was identified as a desire for information versus apathy.
This continuum was between a desire for more information about how the
Air Force affects her life and the apathy which is present in all walks
of life. These feelings were active in that the dynamic stretches from
apathy to curiosity. It also appeared that there was a recognizable
solidarity of interest in what the Air Force was and did.
The dynamic at work in Factor III could most aptly have been
labeled familial maturity and independence from the Air Force/
familial immaturity and dependence on the Air Force. The dynamic
was one of growth and change in focus of attention. As the family
matured, the wife became more interested in its development and less
interested in her relationship to the Air Force.
Factor IV provided an insight into the wives' perception of
the current societal trend of distaste for the military. The continuum
traversed the area between prideful identification with her spouse's
job and the couple's relationship to the Air Force to apologetic
recognition and rejection of identification with the Air Force.
The description of Factor V was a facet of wives' attitudes
that was not directly related to the Air Force as a functioning
organization. Instead, it was an indication of the group identification/
disassociation among the wives of Air Force personnel. This within
group bipolarity was apparently a very important reality of life in
the Air Force community.
The dynamic exposed in Factor VI was difficult to interpret.
Without referring to causality, it appeared that physical/psychological
separation or proximity/identification attitudes were present in the
sample.
Factor VII probably revealed an experiential attitude set about
regulation/restriction by the Air Force. The dynamic between wives'
feelings of personal freedom of action and restriction of action was
quite obvious.
The eighth factor was interpreted as two varying perceptions of
the adage "rank hath its privileges (power)". It was based on the
perceived transference of the husband's rank to the wife. The
dynamic was primarily one of perceived power in the transference
as opposed to no power. The younger, lower rank wives felt more power
was inherent in the transfer, while the older, higher ranking wives
were frustrated by the erosion of their preconception with the
realization of the lack of power that the husband's rank gives to
his wife.
Identification with the source of information about the Air
Force was the interpretation of Factor IX. It became apparent that
as the wife learned more about the system with which she was involved,
she identified more closely with the source. The poles of this
continuum were the Air Force itself and her spouse.
The last factor was viewed as the conscious commitment of the
wife toward involvement or noninvolvement. The frustration dimension
did not enter into this dynamic, as participation was directly related
to the personal decision of the wife.
Discriminant Function Analysis of Selected Groups and Subgroups
(Note: The nature of the data array required that separate
discriminant function analyses be performed on each of the
three instruments. The voluminous nature of the resulting
analysis (21 pages of tables) precludes any but a cursory
treatment of the findings here. The interested reader
is referred to the technical report 3103, "The Air Force Wife -
A Study of Morale among Military Dependents," issued by the
Center for Human Appraisal, Wichita State University, for a
detailed and comprehensive analysis of this data.)
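While the report's own discriminant analyses are not reproduced here, the underlying technique can be sketched with Fisher's linear discriminant on synthetic data. This is a generic illustration, not the study's actual computation; the group means and scale names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two simulated groups of wives scored on 3 attitude scales
# (e.g. association with the AF, attitude toward benefits, services).
crew = rng.normal(loc=[2.0, 2.5, 2.4], scale=0.8, size=(60, 3))    # missile-crew wives
other = rng.normal(loc=[3.0, 3.4, 3.2], scale=0.8, size=(140, 3))  # all other wives

# Fisher's discriminant direction w = Sw^-1 (m1 - m2), where Sw is
# the pooled within-group scatter; projecting scores onto w gives
# the single axis along which the two groups separate most.
m1, m2 = crew.mean(axis=0), other.mean(axis=0)
sw = (np.cov(crew, rowvar=False) * (len(crew) - 1)
      + np.cov(other, rowvar=False) * (len(other) - 1))
w = np.linalg.solve(sw, m1 - m2)

proj_crew, proj_other = crew @ w, other @ w
print("projected group means:", proj_crew.mean(), proj_other.mean())
```

A clear gap between the projected group means is what "discriminably different" reports like the one below are summarizing.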
Wives whose husbands were in Combat Missile Crews were discriminably
different from the rest of the test population. The main factor underlying
this difference appeared to be a lack of association between the
wife and the Air Force, emphasized by a negative attitude on the
part of the Missile Crew wives toward facilities, benefits, and
services offered by the Air Force.
The many obtained differences between the
First Term and Career groups could be expected because of the
age and time in the service differentials. The first comparison showed
that the career wives identified more closely with the Air Force,
knew more about it, and generally maintained a stronger relationship
with the Air Force. The finding that the career wives were more
knowledgeable was further emphasized by the fact that the First Term
Officer group was more easily influenced by military propaganda.
Although numerous complaints were voiced, the wives for the
most part had a favorable attitude toward the military lifestyle
and their participation in it. In fact, most of the wives expressed
a desire for more information about the Air Force. The study
indicated the average AF wife knows very little about the
facilities, benefits and services available to her.
Nevertheless, the dependent briefings, designed to offer this
type of information, seemed to be an irritant to some wives,
especially the younger group. A fairly common complaint about the
dependent briefings was the "impersonal" manner in which the wives
were treated. Many also disliked the term "briefing." The term
seemed to accentuate the military atmosphere of the meetings.
Because of this attitude, many of the wives said they did not attend
the dependent briefings regularly, which forced them to look for an
alternate information source.
This alternate source usually was the husband. But he too was
often an insufficient source of information. For one thing,
the husband doesn't know the kinds of things his wife wants and needs
to know and doesn't appear to be interested enough in these areas
to find out.
The information factor should be of major importance to AF officials
as the survey also indicated the wives, as a group, tend to identify
with and form attitudes about the AF based on the information they received.
However, the survey also indicated that coercion would NOT be a good
means of attaining the desired goals. That is, requiring wives to
attend dependent briefings undoubtedly would have a very negative
effect.
As might be expected, the wives of men with long term associations
with the AF tended to have a prideful identification with the Air Force.
Some of the younger wives, however, seemed apologetic about their
husband's military affiliation. This may be due to the recent negative
societal view of the military among some young people.
The survey indicated that the wives who identified with the AF
lifestyle tended to live close to the base, and those least interested
in that lifestyle lived farther from the base. However, it is not
possible from this study to determine which is the cause and which is the
result of this phenomenon.
It was also found that wives with no children tended to depend
on the military to give structure to their lives. However, as children
came into the family unit, the wife changed her viewpoint and became
more involved with her family and less involved with the military.
An interesting phenomenon revealed by the survey was the concept
some of the wives had about rank transference. It appeared that wives
of lower ranking men perceived a great deal of power transferred to the
wives of higher ranking men (both officer and enlisted). However, when
the men rose into the higher ranks, the wives realized there was very
little real power transferred to the spouse. The frustration seemed
to increase as the husband's rank increased and the realization that
she had no real power became more evident.
The various group comparisons made in the study revealed that
the feelings and attitudes of the wives tended to be group
specific.
For example, the group of wives with husbands in missile crews
had the most negative feelings about the Air Force and its benefits
and facilities. This was not surprising, however, as our previous
Career Attitude Survey showed missile crew members to be among the
least satisfied men in the AF.
One factor that may have contributed to the poor attitude of the
"missileer's" wife was the fact that her husband's duty required him
to be separated from her overnight several nights a week. This
factor may be compounded by the fact that most missile crew members
were first-termers and were relatively young. The wife's younger
age may have tended to make her less understanding about her husband's
recurring absence.
Wives of the career group men were more familiar with the benefits
available to them and had a more positive attitude toward them
than the first term wives.
In comparison to the enlisted wives, the wives of officers were
basically more socially oriented and participated to a greater degree
in the AF centered community.
The career officer wives identified more closely with the AF, knew
more about it, and generally had a stronger relationship with the Air
Force than the first-term officer wives. The first term officer group
also seemed to be more readily influenced by military propaganda.
The difference between the career officer wife and the career
enlisted wife appeared to be one of rank. The career officer wife
felt more socially interested in the AF and generally was more
associated with the AF. Also, the officer wife did not appear to
approve of a program or benefit merely because it was 'Air Force.'
By dividing the wives into four groups it was possible to rank
them according to their feelings about the AF. The career officer
wife was the most favorably inclined toward the AF, followed by
the career enlisted, first term officer, and first term enlisted
wife.
IMPLICATIONS
Overall, it would appear that, because of her obvious effect
on the husband, the AF wife's somewhat positive feelings about the
Air Force and her desire for more information are desirable effects.
Her apparent desire to become more of a part of the military
centered community seems to be blocked because of the lack of an easy
avenue to do so. The results seemed to indicate that both she
and the Air Force lacked the real initiative to draw her into the
community.
The apparent failure of the dependent briefings in providing
information to the wives should be given some attention. It seems that
a more "feministic" approach is desired by the wives.
In conclusion, it seems evident that the AF, as well as the other
services, is ignoring a segment of its community that is an extremely
powerful factor in retention and possibly an equally powerful influence
on job performance and morale among the men.
REFERENCES
Belt, John A., and Parrott, Gerald S. The Relationship of Satisfiers-
Dissatisfiers in a Military Unit to Re-enlistment. Paper presented at
Inter-University Seminar, Chicago, Illinois, September 21-23, 1972.
Eutrails, research newsletter published by Center for Human Appraisal,
Wichita State University, Vol. 1, No. 2, Nov. 1, 1971.
"Eternal Triangle . . . Man, Wife, and Work". Industry Week, 168:4,
January 25, 1972.
Farber, Leslie E. "He Said, She Said". Commentary, 123:24, March
1972.
Helfrich, M. L. The Social Role of the Executive's Wife. Bureau of
Business Research, Ohio State University, 1965.
Little, Roger W. "The Military Family". Handbook of Military Insti-
tutions, 247-272. Beverly Hills: Sage, 1971.
Lund, Donald A. "Problems of Junior Officer Retention in the Volunteer
Army: The Case of the Military District of Washington".
Paper presented at the Workshop on Military Manpower - The All
Volunteer Military, Inter-University Seminar, Chicago, Illinois,
September 21-23, 1972.
Lund, Donald A. "Active Duty - Yes or No?" The Military Police
Journal, 20 (February): 15-17, 1971.
McKain, Jerry L. "Feelings of Alienation, Geographical Mobility, and
Army Family Problems: An Extension of Theory". Published Disser-
tation: Catholic University of America. National Technical
Information Service, 1969.
"Navy Wives' Perceptions of Conditions of Navy Life". Naval Personnel
Research and Development Laboratory, Defense Documentation
Center, Washington, D.C., March 1971.
Triebal, Joane. "Your Wife: A Prisoner of Your Success?" Nation's
Business, 73:13, June 26, 1972.
Whyte, William H. Is Anybody Listening? Doubleday, New York, 1948.
APPENDIX 1
Factor I: Passive alienation/integration

Variable                                                           Factor
Number   Variable Description                                      Loading
 4.  The A.F. doesn't care what the wives of its personnel think.  -0.689
13.  The A.F. doesn't care what I think.                           -0.884
 7.  There are no procedures for me to express my feelings about   -0.788
     A.F. policies.
10.  Civilians don't respect military personnel.                   -0.570
19.  The A.F. keeps the wives of its personnel well informed.       0.559
24.  Life as an A.F. wife provides me many opportunities to get     0.402
     involved.
 5.  A.F. wives should be kept better informed of base activities. -0.369
Factor II: Curiosity/apathy

Variable                                                           Factor
Number   Variable Description                                      Loading
15.  If I understood it more, I think the Air Force would be        0.739
     interesting.
14.  I would like to be invited to attend my husband's              0.707
     reenlistment interviews.
 9.  I don't want to know more about my husband's job than I       -0.701
     already know.
 8.  Base activities have a direct effect on me.                    0.526
 5.  A.F. wives should be kept better informed of base activities.  0.374
11.  Most wives think they have their husband's rank.               0.322
Factor III: Familial maturity/immaturity

Variable                                                           Factor
Number   Variable Description                                      Loading
51.  How long have you been in the service? (4-5 yrs.)              0.860
53.  How many children do you have? (1-2)                           0.788
52.  Do you expect your husband to make the A.F. a career?         -0.727
     (probably)
22.  The A.F. should not be a 24-hour a day job.                    0.590
24.  Life as an A.F. wife provides me many opportunities to get    -0.480
     involved.
 1.  Wives of A.F. personnel should be involved in formulating      0.360
     A.F. policy.
11.  Most wives think they have their husband's rank.               0.305
Factor IV: Prideful/apologetic identification

Variable                                                           Factor
Number   Variable Description                                      Loading
20.  A wife should be proud of her husband's profession.           -0.889
25.  The A.F. should not be just another job.                      -0.553
 5.  A.F. wives should be kept better informed of base activities. -0.413
Factor V: Within subculture identification/disassociation

Variable                                                           Factor
Number   Variable Description                                      Loading
18.  A.F. wives have a lot in common.                               0.825
12.  A.F. wives have a number of similar problems.                  0.722
17.  I often feel I am a member of the A.F.                         0.702
 2.  I enjoy associating with other A.F. wives.                     0.565
21.  I know the wives of members of my husband's unit fairly well.  0.540
24.  Life as an A.F. wife provides me many opportunities to get     0.474
     involved.
Factor VI: Physical and psychological proximity/separation

Variable                                                           Factor
Number   Variable Description                                      Loading
54.  How far do you live from base? (3-5 mi)                        0.805
 1.  Wives of Air Force personnel should be included in            -0.370
     formulating Air Force policy.
Factor VII: External restriction/freedom

Variable                                                           Factor
Number   Variable Description                                      Loading
 6.  Wives of Air Force personnel are free to do what they want.    0.790
Factor VIII: Rank transference aspiration/frustration

Variable                                                           Factor
Number   Variable Description                                      Loading
55.  What is your husband's rank?                                   0.632
24.  Life as an Air Force wife provides me many opportunities to   -0.336
     get involved.
11.  Most wives think they have their husband's rank.               0.367
Factor IX: Identification with information source

Variable                                                           Factor
Number   Variable Description                                      Loading
26.  Where did you learn the most about the Air Force?              0.680
 1.  Wives of Air Force personnel should be involved in             0.270
     formulating A.F. policy.
14.  I would like to be invited to attend my husband's              0.268
     reenlistment interview.
17.  I often feel I am a member of the Air Force.                  -0.250
Factor X: Personal latitude for involvement/non-involvement

Variable                                                           Factor
Number   Variable Description                                      Loading
23.  Air Force benefits do not interest me a great deal.            0.684
16.  I don't care about Air Force policy except as it affects me.   0.581
 5.  A.F. wives should be kept better informed of base activities. -0.442
 1.  Wives of Air Force personnel should be involved in            -0.392
     formulating Air Force policy.