Unified model of forces using Preon Model #8

Assume an overriding aim that the four forces of physics are unifiable. All the forces must then have much in common, as the original or ancestral unified force must have been a coherent, fully functioning composite of all the other forces.

QCD is colour-based, with an SU(3) group structure, while QED is colourless [or rather a neutral mix of colours] with a group structure embedded within an electroweak structure. The group structure of SU(3) allows QCD colour to behave like light, in that one colour can contain others within it: adding red + green + blue gives white. Re-using this idea suggests that the white charge of QED is an amalgam of QCD colour charges.

A red up quark clearly has a positive electric charge while a red down quark has a negative one. This is a major stumbling block for accepting that, at my preon level, QCD colour (e.g. red) carries negative QED electric charge while QCD anticolour carries positive electric charge. QED particles are colour neutral, being either white (negative charge) or black (positive charge). In a preon model, some aggregates of preons can be net white or net black, and adding those aggregates into a quark can change the overall sign of the electric charge at quark level.

The weak force is harder to handle in detail, as it is affected by the higgs field, but basically weak isospin is an electric vacuum field and fits into a total composite unified force.

A graviton, acting on a mass charge with spin 2, has two incorrect features which need to be corrected. The first is that mass, as a charge, is not present in QCD or QED or Weak. And mass does not feature in my model as either a charge or a fundamental quality of a preon. Spin 2 seems to me only to be required to cope with having a plus sign always associated with mass. A spin 1 boson would repel two masses, so spin 2 is required to attract two positive mass charges. Eschewing spin 2 goes along with eschewing the mass charge. Using a spin 1 graviton acting on colour charges unifies gravitation with the other forces and allows the four forces to be integrated into a single composite force.

Gravitational colour is weaker than QCD colour, so imagine an optical cable with very many strands representing QCD. Gravitation is enforced through just a few strands which have split off from the main bundle. The force of QCD is approximately one thousand million million million million million million times (that is, 10^39 times) stronger than the gravitational force, so there need to be that many more fibres in the QCD ‘cable’ than in the gravity ‘cable’.

In my model the photon, Z and gluon are three family members. Gravity, in my model, can be enforced by these same three bosons, so long as all elementary particles contain gravitational colour/anticolour, similar in colour format to the QCD colour contained in the gluon, but a thousand million million million million million million times weaker.

The three bosons can provide the same suite of features as QED and QCD but on a very much larger scale of distance (10^39 times larger). The QCD-like feature provides the generally attractive quality of gravity out to about 60 million light years; the QED-like aspect causes dark-energy repulsion at even greater distances.

For more, see my vixra paper at http://vixra.org/abs/1709.0021

Hexark and Preon Model #8 and the Unification of Forces: a Summary

This paper summarises a model for building all elementary particles of the Standard Model plus the higgs, dark matter, dark energy and gravitons, out of preons and sub-preons. The preons are themselves built from string-like hexarks each with chiral values for the fundamental properties of elementary particles. The four forces are shown to be unified by hexarks being string-like objects comprising a compactified multiverse-like structure of at least 10^39 strands of string-like 4D space and time blocks (septarks). Despite the individual forces seeming very different from each other, they all derive from the same colour strands, either as net colour braids (QCD and attractive gravity) or as net neutral-colour braids/strands (electric charge, weak isospin and dark energy, or repulsive gravity). Different strength forces have different numbers of braids in them but QCD-colour is qualitatively, but not quantitatively, the same as gravitational colour while electric charge, weak isospin and dark energy are all qualitatively the same neutral-colour mix, but not quantitatively the same.

Posted in physics

Breaking Bell’s Inequality using local, real, hidden variables

I have posted a paper on the vixra website at http://vixra.org/abs/1610.0327

with the title:

Correlation of – Cos θ Between Measurements in a Bell’s Inequality Experiment Simulation Calculated Using Local Hidden Variables

and abstract:

This paper shows that the theoretical correlation between elementary particles’ hidden variable unit vector spin axes p, projected onto Alice’s and Bob’s respective detector angle unit vectors a and b, in a Bell’s Inequality experiment, is – cos θ. This equates with the quantum correlation value and exceeds the “Bell’s Inequality” attenuated correlation in absolute magnitude. Further, aggregates of elementary particles’ hidden variable unit vector spin axes p, when projected onto appropriate detector angle vectors, give values which break the Bell’s Inequalities in exact accordance with values given by Quantum Mechanics calculations. On the other hand, Bell’s attenuated correlations correspond to correlations calculated without fractionalising the raw integer measurements A and B made by Alice and Bob. Also, when aggregating the raw integer measurements, the Bell’s Inequalities are not broken.

To summarise my findings: there is a 2 × 2 way of looking at my two computer simulations in this paper.

Simulation #1: correlations

• Exact vectors: (a) The quantum mechanical correlation of 0.707 for θ = 45° cannot be directly calculated via QM, but (b) it has been directly calculated in my Simulation #1, so there is nothing spooky going on here despite 0.707 breaking Bell’s Inequality. The magnitude of this correlation would be equivalent to a CHSH statistic S = 2.828, except that the CHSH statistic only applies to fuzzy vectors, not to exact vectors.

• Fuzzy vectors on a hemisphere: The Bell correlation of 0.5 for θ = 45° is the mundane, attenuated correlation associated with failure to break Bell’s Inequality, equivalent to a CHSH statistic of S = 2. The most recent CHSH experiment finds S = 2.4 based on 245 pairs of particles: see https://arxiv.org/abs/1508.05949

Simulation #2: proportions

• Exact vectors: (a) There exist truly amazing QM calculations by Susskind of proportions, which are projections onto exact vectors, and these are accurately matched by (b) my Simulation #2 calculations, so there is nothing spooky going on here despite them breaking Bell’s Inequality. See https://www.youtube.com/watch?v=XlLsTaJn9AQ&p=A27CEA1B8B27EB67

• Fuzzy vectors on a hemisphere: Bell proportions, mundane values which do not break Bell’s Inequality.

The fuzzy vector column results are straightforward, and my simulations show these as failures to break the Bell’s Inequality. They both use fuzzy vectors on a hemisphere.  Fuzzy vectors on a hemisphere correspond to raw measurements by Alice and Bob at their detectors.  They have unit magnitude but the vector is only known to be pointing at either one hemisphere or the other.  Hence a fuzzy vector.  Proportions and correlations both use simple counts and sums of 1s and -1s to obtain the mundane results.  The exact particle vector direction is irrelevant to these ‘fuzzy’ calculations.

Next, on to the exact-vectors column. Amazingly, QM allows calculations of proportions which break the Bell Inequality. I used Susskind’s online example of a Bell Inequality for my Simulation 2.

Even more amazingly, I obtained the QM values for these proportions very accurately using my real, local, hidden, variable Simulation 2!!!

What my Simulation 2 does is start with the hidden particle unit vectors and use the standard dot-product calculation of projections, or loadings, onto the detectors’ exact vectors. These projections are not integer values in general, so fractional values are being added along exact vectors. As this agrees accurately with the QM calculations, I conclude that the QM calculations are measuring the same thing. No real surprise, as these QM calculations use projection operators!

So far so good.

It is the correlations using exact vectors which cause all the problems of misunderstanding in this field.

I have found in my Simulation 1 correlation = 0.707 for theta = 45 degrees.

My simulation 1 starts with the exact particle unit vectors and again uses the standard dot product calculation of non-integer projections onto the detectors’ exact vectors.  So again fractional projection values are being added along exact vectors.

AFAIK, no QM calculation can go directly to correlation= 0.707 because it requires the knowledge of the hidden variables i.e. the particles’ exact directions, for individual particles, and these are never, ever known in a real experiment.  So calculating a correlation for an exact vector needs a lot more information than does calculating an overall proportion along an exact vector. Too much information for a real experiment.

However, my Simulation 1 gives 0.707 because I can generate artificial particles with known (to my computer) hidden variables.  So I and everyone else knows that the quantum correlation really exists for theta = 45 degrees and is 0.707.  After all, I have obtained it in Simulation 1 based on exact vectors.
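The two correlations discussed here can be sketched in a few lines. This is my own minimal reconstruction of the calculation described, not the author's actual Simulation 1 code: hidden unit spin axes are drawn uniformly on the sphere, Alice and Bob take fractional projections onto exact detector vectors separated by θ, and the same data are then reduced to hemisphere signs to imitate the fuzzy measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta_deg, n=200_000):
    # Detector unit vectors a and b separated by theta.
    theta = np.radians(theta_deg)
    a = np.array([0.0, 0.0, 1.0])
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])

    # Hidden-variable spin axes p: unit vectors uniform on the sphere.
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)

    # "Exact vector" data: fractional projections (Bob's particle is -p).
    A_exact, B_exact = p @ a, (-p) @ b
    # "Fuzzy vector" data: only the hemisphere (the sign) is recorded.
    A_fuzzy, B_fuzzy = np.sign(A_exact), np.sign(B_exact)

    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    return corr(A_exact, B_exact), corr(A_fuzzy, B_fuzzy)

exact, fuzzy = simulate(45)
print(exact, fuzzy)  # near -0.707 and -0.5 respectively
```

The exact-projection correlation reproduces −cos θ (magnitude 0.707 at 45°), while the sign-only correlation gives the attenuated 0.5-magnitude Bell value, matching the two correlation cells of the 2 × 2 view discussed above.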

So on to the misunderstanding. QM has no access to the hidden variables for calculating the quantum correlation, so it cannot do it.  But it is known with certainty that the quantum correlation actually exists.  So somewhere along the line over the years someone must have decided that the quantum correlation must be able to arise from the fuzzy vector data.  But this is a complete misunderstanding of the situation.  The situation is that the quantum correlation corresponds to a correlation on exact vectors whereas the CHSH statistic using real experiments is derived using correlation between fuzzy vectors.

This mis-match of what the correlations are measuring underlies all the supposed mystery of the quantum correlation.  There is no spookiness attached to these quantum correlations as I have simulated them non-spookily for exact vectors.  The only spookiness is why anyone should chase the supposedly spooky correlation in the wrong cell above, searching in the top right hand cell of correlations between fuzzy vectors, for which my Simulation 1 gives the mundane and attenuated value of correlation = 0.5.  It is no surprise that the correlation is attenuated as the vectors are fuzzy, and fuzziness indicates lack of reliability of measurement which is well-known in reliability theory to attenuate a correlation.

Of course there could be something going on in nature which is very very spooky and much more spooky than anything above, and that I have not covered in my local, real simulations.  My own preon model (http://vixra.org/abs/1511.0115) subdivides the electron into many components which does allow a glimmer of scope for non-local effects.  Also my preon model has 24 dimensions as the preons contain strings, and it is also possible that what looks like a local effect in multi-dimensions could look like a non-local effect when viewed in three dimensions.



Quantum Gravity and a new table of elementary particles

This week I have used Preon Model #7 to make a model for gravitation based on the exchange of graviton bosons.  See http://vixra.org/abs/1510.0338 for Models for Quantum Gravity, Dark Matter and Dark Energy Using the Hexark and Preon Model #7 and http://vixra.org/abs/1505.0076 for Hexark and Preon Model #6: etc.  A full report on Preon Model #7 is in draft.

First, how does the graviton fit into a table of elementary particles?  It does not easily fit into such a table without modifying the table structure.  The first change is that the photon, Z and gluon are three members of the same family.  Note that they all have zero electric charge, spin +1 or – 1 and zero weak isospin.  In my model that makes them one family.  The photon deals mainly with uncoloured particles.  The Z is designed to interact, albeit neutrally, with coloured quarks, while the gluon can alter quark colour in interactions.  That means that the photon is first generation, the Z is second generation and more complex in structure while the gluon is third generation and even more complex in colour, containing enough preons to exhibit colour and anticolour properties simultaneously.

The table of elementary particles in my model has a very simple row-by-column structure, where the columns are the generations and the rows are different families. The graviton is a single family of bosons with zero electric charge, spin +2 or -2 and weak isospin +0.5 or -0.5. There are at least three generations of graviton; the third generation is as complex as the gluon and has colour-anticolour properties. Just as the electron has two forms, left-handed and right-handed, so the graviton has two forms: one where the spin and weak isospin have the same sign, or handedness, as each other and another where the signs differ.

The higgs family also has at least three generations and the third generation higgs is complex enough to have colour-anticolour.  The higgs has no electric charge, no spin and weak isospin of +0.5 or -0.5.

The dark boson family has at least a third generation member with zero properties except colour-anticolour, and that colour-anticolour property is just like that for the gluon, graviton, higgs and dark boson.  It also may be possible that the top and bottom quarks share this colour-anticolour property.  They could have colour plus colour-anticolour.  A fourth generation gluon could have colour-anticolour plus colour-anticolour.

So why is gravity always attractive? In my model, it is not always attractive! It is no more so than are the photon, Z and gluon taken in combination, and the types of gravitons combined are as numerous as the types of QED photons and weak and strong QCD gauge bosons. So why does gravity appear to be always attractive? The answer lies in its weakness. In my model, an electron repels an electron using the first-generation graviton, just as an electron repels an electron via QED, but that repulsion is too weak to be presently detectable. The third-generation colour-anticolour graviton is the most important, as it attracts quarks (and gluons and higgs and dark bosons) together gravitationally. But why do we never see quarks repelling quarks gravitationally? There is a parallel question: why do quarks attract quarks, as a net effect, within the atomic nucleus? The answer to that also answers the question about gravitational attraction. The strong force is very approximately 10^40 times stronger than gravity. That means that where the sphere of influence of the strong force is of the order of the diameter of the nucleus, the sphere of net attractive gravitational influence of the third-generation graviton is of the order of 10^40 times as big as the nucleus. That is a sphere of attractive-only influence on a universal, or at least intergalactic, scale. But far enough away, the first-generation gravitational influence between quarks, which is repulsive, can assume dominance. And that repulsion at a remote distance is seen as dark energy.
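This scale argument can be checked on the back of an envelope. The numbers below are my own round figures for illustration (a nuclear diameter of roughly 10^-15 m, and the ~10^40 strength ratio quoted above), not values from the post:

```python
# Hypothetical round numbers for illustration only.
nucleus_diameter_m = 1e-15     # rough order of a nuclear diameter
strength_ratio = 1e40          # strong force / gravity, as quoted above
light_year_m = 9.46e15         # metres per light year

# Scale the strong force's sphere of influence by the strength ratio.
reach_ly = nucleus_diameter_m * strength_ratio / light_year_m
print(f"{reach_ly:.2e} light years")  # of order a billion light years
```

Which is indeed an intergalactic, if not universal, distance scale.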

The dark boson of the third generation can interact gravitationally only with the third generation gravitons.  The first and second generation dark bosons (if they exist … as they have no properties we know of) cannot interact repulsively through the first generation graviton and so cannot take part in dark energy.  In my guestimation, the higgs is also a candidate for dark matter and it could take part in both attractive gravitation via the third generation graviton and also in dark energy.

I am possibly more pleased at finding a neat structure for the table of elementary particles than at finding the graviton structure. I have always been disconcerted by three things about the Standard Model table: (1) the higgs stuck out on its own, (2) the non-recognition that the photon, Z and gluon are three generations of one family and (3) the W lumped with the Z because of their weak-force connection. The W in my table is a second-generation boson in a separate family row.

A further modification in my model is the way interactions are represented. I have made it a rule that weak isospin is conserved in interactions. This means that an electron cannot simply radiate a photon because it is accelerating. This is basically an issue of field interaction effects versus particle interaction effects. In my model, there is an incoming catalyst boson (the 1/4 higgs+) which interacts with the left-handed electron and, as a result, the electron changes handedness and emits a photon- (Figure S in http://vixra.org/abs/1510.0338). For a left-handed red down quark, say the incoming catalyst boson is a Z- (see Figure C), the quark changes handedness and a 1/2 graviton- is emitted (where a 1/2 graviton is a second-generation graviton). The QED-like repulsion in this second-generation gravitational interaction will be swamped by the third-generation attractive QCD colour forces taking place in other interactions.


Ben6993’s Hexark and Preon Model #6

I have finished a vixra paper (27pp),

Title: Hexark and Preon Model #6: the Building Blocks of Elementary Particles. Electric Charge is Determined by Hexatone and Gives a Common Link Between QED and QCD.

and it is now loaded onto the vixra website at: http://vixra.org/abs/1505.0076

Abstract: The paper shows a model for building elementary particles, including the higgs, dark matter and neutral vacuum particles, from preons and sub-preons. The preons are built from string-like hexarks each with chiral values for the fundamental properties of elementary particles. Elementary particles are unravelled and then reformed when preons disaggregate and reaggregate at particle interactions. Hexark colours are separately described by hue (hexacolour) and tone (hexatone). Hexacolour completely determines particle colour charge and hexatone completely determines particle electric charge. Hexacolour branes within the electron intertwine to form a continuously rotating triple helix structure. A higgs-like particle is implicated in fermions radiating bosons.

Warning:  Model#6 supersedes models #5 and earlier, and any of my write ups before May 2015 contain errors in the eigenvalues for weak isospin in the up quark and neutrino.  The W and Higgs have more forms in Model#6 than previously and my gluon model structure now conforms more closely to the standard model.

Amendments to Model#6 from earlier models:

My error prior to Model#6 was to assume the following incorrect eigenstates for the up quark and the neutrino:
where () = (electric charge, spin, weak isospin)
LH up = (2/3, -0.5, -0.5)
LH ν = (0, -0.5, -0.5)

whereas they should be:
LH up (2/3, -0.5, +0.5)
LH ν (0, -0.5, +0.5)

But I have now corrected this in the new Model#6. It required a fourth preon, preon D, with properties (-0.5, 0, +0.5) in order to be able to build the ν and up quark using only four preons to conform with the pattern for the first generation elementary particles.

There were two other structural effects: the higgs (0,0,-0.5) can now be built in two different ways: ABC’C’ X6 as before but also as D’C X7 (where Xn is n neutral pairs of preon + antipreon).

Also, there must be two different forms of W- : (-1,-1,-1) and (-1,+1,-1). This is because to send an LH up (2/3, -0.5, 0.5) to a RH down (-1/3, 0.5,0) requires an addition of (-1, +1, -0.5) while to send a RH up (2/3,0.5, 0) to a LH down (-1/3, -0.5, -0.5) requires an addition of (-1,-1,-0.5). That requires the two forms of W-. And the extra 0.5 weak isospin that is needed comes from a 1/4 higgs which complies with many interactions in my preon model which require the 1/4 higgs or 1/2 higgs or higgs as a participant.
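The bookkeeping in the two transitions above can be checked mechanically. The sketch below is my own illustration (the tuples are the (electric charge, spin, weak isospin) triples quoted in the text, not code from the paper):

```python
from fractions import Fraction as F

# (electric charge, spin, weak isospin) triples from the text above.
LH_up, RH_down = (F(2, 3), F(-1, 2), F(1, 2)), (F(-1, 3), F(1, 2), F(0))
RH_up, LH_down = (F(2, 3), F(1, 2), F(0)), (F(-1, 3), F(-1, 2), F(-1, 2))

def diff(final, start):
    """Quantum numbers the emitted/absorbed bosons must supply."""
    return tuple(f - s for f, s in zip(final, start))

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

W1, W2 = (F(-1), F(1), F(-1)), (F(-1), F(-1), F(-1))  # the two W- forms
quarter_higgs = (F(0), F(0), F(1, 2))  # supplies the extra +0.5 weak isospin

# Each transition closes exactly with one W- form plus a 1/4 higgs.
print(diff(RH_down, LH_up) == add(W1, quarter_higgs))  # True
print(diff(LH_down, RH_up) == add(W2, quarter_higgs))  # True
```

Both checks confirm that the two W- forms, each topped up by the 1/4 higgs's +0.5 weak isospin, account for the required additions of (-1, +1, -0.5) and (-1, -1, -0.5).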

I have also introduced a new term which correlates exactly with electric charge: hexatone. If the hexacolour of a preon is relabelled as white (= -1) for coloured preons and black (= +1) for anticoloured preons, then blackness and whiteness represent tonal values for the preons. Aggregating the tonal values across the preons in an elementary particle gives an exact match for electric charge in my model.

If you are drawing or painting you need to take into account both hue (colour) and tone (lightness to darkness). Quarks and gluons already have the hue analogy (red, green, blue) incorporated as QCD, but a red up quark has positive electric charge and a red down quark has negative electric charge, so there is no relationship between colour and electric charge at quark level. At the more fundamental level of preons, a red preon always has negative electric charge while an antired preon always has positive electric charge. So electric charge can be considered as preon tone, with colour being lightness and anticolour being darkness. The preons thus have both hue and tone: the hue determines QCD (colour charge) while the tone determines QED (electric charge).




Electric charge and coloured socks

Socks can be red, green, blue, antired, antigreen or antiblue.  Every sock has a colour charge and an electric charge, as follows:

Sock name    Colour charge     Electric charge
Red          Red (R)           -1/6
Green        Green (G)         -1/6
Blue         Blue (B)          -1/6
Antired      Antired (R')      +1/6
Antigreen    Antigreen (G')    +1/6
Antiblue     Antiblue (B')     +1/6

These socks have a clear correlation of colour charge with electric charge.  Unfortunately, unlike these socks, quarks do not have a clear dependence of negative electric charge on colour and positive electric charge on anticolour.  A red down quark has negative charge but a red up quark has positive electric charge.

But, can we build quark properties out of these socks?  Yes.  Say a red down quark and a red up quark contain the following socks:

Quark name    Quark's six socks    Quark's colour charge                           Quark's electric charge
Red down      (RGB)(RG'B')         RGBRG'B' = RR(GG')(BB') = RR = Red [1]          -1/3
Red up        (R'G'B')(RG'B')      R'G'B'RG'B' = RR'(G'B')(G'B') = RR = Red [2]    +2/3

[1] where GG' and BB' are both colour neutral.

[2] where RR' is colour neutral and (G'B') is red {R'G'B' = neutral, so R'(G'B') is neutral, so G'B' = R}.

See also http://vixra.org/abs/1505.0076


Summary table of elementary particles in Ben6993’s Preon Model#5

No. of preon units*   4                   12              20
quarks                up                  charm           top
                      down                strange         bottom
leptons               electron neutrino   muon neutrino   tau neutrino
                      electron            muon            tau

No. of preon units*   4         8         16      32
bosons                photon    Z and W   gluon   2-gluon
                      ¼-higgs   ½-higgs   higgs   2-higgs
                      ¼-axion   ½-axion   axion   2-axion

*There are 24 preons per preon unit.



Revised on 25 October 2014


Emergent space using Rasch pairs analysis/ adaptive comparative judgment



Pseudo-random data are used to illustrate the relationship between errors in raw data being comparatively judged and the resulting Rasch pairs location parameters, first for data which are relatively homogeneous and second for data which have various amounts of heterogeneity.  For each data type, various error sizes are used.  Rasch pairs location parameters are demonstrated to be plotted on a contracted scale when the objects appear to be homogeneous.  As space is contracted near concentrations of mass, so is the Rasch scale also contracted when the objects in it are determined to be located very close to one another.


There are two interesting features of Rasch analysis.

• The Guttman structure is not a fault of the Rasch model; the Guttman effect would also cause problems in constructing any physical scale.

• The Rasch output parameters are on an apparently arbitrary scale, normally said to run from +3 to -3, but the scale is often different, e.g. +1 to -1.5. What controls the calculated range?

The Rasch scale is investigated using pseudo random data sets generated for this paper. Different data sets have been generated to have different amounts of true error in their locations and the Rasch output parameters were computed for each such data set to find the corresponding ranges in parameter values.

This raises the question of what determines the range of the parameters, i.e. the scale of separation of the objects.


In this part, sets of data were generated and the DOS BIGSTEPS Rasch pairs analysis was run on each set of data.  (Ref.: http://www.winsteps.com/a/bigsteps.pdf, Example 13.  Bigsteps is a free DOS version of Winsteps.)

Two types of data were generated using an MS Excel spreadsheet. In both types, twelve objects were compared one pair at a time, giving 12C2 = 66 comparisons. There had to be an element of randomness for the program to run, so a random adjustment to the true value was added to each object, for each pseudo-judge, in each paired comparison. Therefore 66 × 2 = 132 quasi-random numbers were generated for each set of data.

In the truly homogeneous set of data, each of the twelve objects was set to have a true value of zero. Four sets of data were generated with different limits on the sizes of the random numbers added to the true value of zero: within -2 to +2, within -1 to +1, within -1/2 to +1/2 and within -1/8 to +1/8. The winner of each paired comparison was deemed to be the object with the larger of the pair of random numbers.
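The data-generation step just described can be sketched as follows. This is my reconstruction in Python rather than the original MS Excel spreadsheet; the object numbering and seed are arbitrary:

```python
import random
from itertools import combinations

def generate_comparisons(n_objects=12, noise=2.0, true_values=None, seed=1):
    """All pairs of objects judged once; the winner of a pair is the
    object whose true value plus a fresh random adjustment is larger."""
    rng = random.Random(seed)
    true_values = true_values or [0.0] * n_objects  # homogeneous case
    results = []  # (object i, object j, winner)
    for i, j in combinations(range(n_objects), 2):
        vi = true_values[i] + rng.uniform(-noise, noise)
        vj = true_values[j] + rng.uniform(-noise, noise)
        results.append((i, j, i if vi > vj else j))
    return results

comparisons = generate_comparisons(noise=2.0)
print(len(comparisons))  # 66 comparisons, i.e. 132 random numbers drawn
```

The heterogeneous data sets below correspond to passing `true_values=[1, 2, ..., 12]` with the various noise limits; the win/loss records would then be fed to BIGSTEPS for the Rasch pairs analysis.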


Table 1:  Rasch pairs results for truly homogeneous objects

          Data set +/-2     Data set +/-1     Data set +/-1/2   Data set +/-1/8
          Obj.  Location    Obj.  Location    Obj.  Location    Obj.  Location
           3     0.91        7     0.97        1     0.56        2     0.56
           4     0.52       11     0.97        5     0.56        4     0.56
           8     0.52        1     0.57        7     0.56        5     0.56
           2     0.17       10     0.57       11     0.56       10     0.56
           6     0.17        3     0.19        6     0.20        8     0.20
           9    -0.18        9     0.19        8     0.20        9     0.20
          10    -0.18        2    -0.18       10     0.20       11     0.20
          11    -0.18        5    -0.18        2    -0.16        1    -0.16
          12    -0.18        4    -0.55        3    -0.16        7    -0.16
           1    -0.53        6    -0.55        4    -0.16       12    -0.16
           5    -0.53       12    -0.55        9    -0.94        3    -0.94
           7    -0.53        8    -1.45       12    -1.42        6    -1.42

Range of location parameters    1.44    2.42    1.98    1.98
Ave. error per object           0.45    0.47    0.46    0.46
S.D. of object errors           0.01    0.03    0.03    0.03


Table 2:  Intercorrelations between order of merit of location parameters for truly homogeneous objects

Data set    +/-2      +/-1      +/-1/2
+/-1       -0.490
+/-1/2     -0.049     0.224
+/-1/8     -0.154    -0.196     0.420

The correlation coefficients for the orders of merit of objects in the four Rasch analyses vary from -0.49 to +0.42, with median value -0.05.  These low values indicate that a lack of association can be expected when the objects are truly equal but with some random element added at every place in the 66 paired comparisons.

The four ranges of Rasch parameter location values are 1.44, 2.42, 1.98 and 1.98, with median value 1.98 (where, for example, 1.98 = 0.56 - (-1.42)).

This pattern of Rasch analysis results is what is desirable in a test of comparability of twelve scripts chosen say from two different tests intended to be equivalent or interchangeable.  If replications were undertaken, one would hope for different results each time as a sign of homogeneity of objects.

The second type of data is for truly heterogeneous objects. True location values of 1, 2, …, 12 were allocated to the twelve objects. Adjustments were made to the true values as for the homogeneous data, with random values added within six different size limits: +/-2, +/-4, +/-6, +/-8, +/-10 and +/-12. As the size limit of the random adjustment decreases, the data should look more heterogeneous, i.e. less and less like the first data set in Tables 1 and 2. Five replications of data were made for each size of adjustment (except +/-2). There are too many results to show in as much detail as in Tables 1 and 2, so summaries are shown in Table 3.


Table 3: Summary of results for truly heterogeneous objects.

Data set +/-12

Rep     Range of location parameters   Ave. error per object   S.D. of object errors   Correlation with true location (12 objects)
1       1.45  (0.54 to -0.91)          0.45                    0.02                    0.594
2       2.56  (1.53 to -1.03)          0.50                    0.03                    0.730
3       4.09  (1.67 to -2.42)          0.54                    0.08                    0.767
4       2.39  (1.43 to -0.96)          0.47                    0.03                    0.589
5       3.24  (2.21 to -1.03)          0.50                    0.08                    0.791
Median  2.56                           0.50                    0.03                    0.730

Data set +/-10

Rep     Range of location parameters   Ave. error per object   S.D. of object errors   Correlation with true location (no. of objects)
1       2.56  (1.03 to -1.53)          0.50                    0.03                    0.812  (12)
2       3.00  (1.50 to -1.50)          0.49                    0.04                    0.756  (12)
3       2.94  (1.47 to -1.47)          0.48                    0.04                    0.653  (12)
4       2.74  (1.11 to -1.63)          0.52                    0.04                    0.699  (12)
5       2.98  (0.89 to -2.09)          0.52                    0.07                    0.709  (11)
Median  2.94                           0.50                    0.04                    0.709  (12)

Data set +/-8

Rep     Range of location parameters   Ave. error per object   S.D. of object errors   Correlation with true location (no. of objects)
1       8.09  (4.67 to -3.42)          0.72                    0.17                    0.832  (11)
2       3.14  (1.57 to -1.57)          0.51                    0.04                    0.905  (12)
3       4.09  (1.67 to -2.42)          0.55                    0.08                    0.737  (12)
4       3.70  (2.21 to -1.49)          0.55                    0.07                    0.703  (11)
5       2.53  (1.51 to -1.02)          0.49                    0.04                    0.777  (12)
Median  3.70                           0.55                    0.07                    0.777  (12)

Data set +/-6

Rep     Range of location parameters   Ave. error per object   S.D. of object errors   Correlation with true location (no. of objects)
1       4.32  (2.16 to -2.16)          0.61                    0.09                    0.756  (10)
2       4.72  (1.93 to -2.79)          0.66                    0.09                    0.828  (11)
3       3.71  (1.86 to -1.85)          0.58                    0.03                    0.884  (12)
4       5.14  (2.58 to -2.56)          0.58                    0.09                    0.835  (12)
5       3.51  (1.75 to -1.76)          0.56                    0.04                    0.870  (12)
Median  4.32                           0.58                    0.09                    0.835  (12)

Data set +/-4

Rep     Range of location parameters   Ave. error per object   S.D. of object errors   Correlation with true location (no. of objects)
1       5.34  (2.60 to -2.74)          0.66                    0.11                    0.913  (11)
2       7.24  (3.65 to -3.59)          0.75                    0.09                    0.958  (12)
3       5.46  (2.73 to -2.73)          0.54                    0.08                    0.798  (9)
4       5.50  (2.75 to -2.75)          0.68                    0.12                    0.893  (11)
5       5.79  (2.89 to -2.88)          0.64                    0.11                    0.873  (12)
Median  5.50                           0.66                    0.11                    0.873  (11)

Data set +/-2

Rep     Range of location parameters*  Ave. error per object   S.D. of object errors   Correlation with true location (no. of objects)
1       1.10  (0.55 to -0.55)          0.45                    0.02                    n.a.   (4)

*Based on only four objects, as results for eight objects failed to converge due to a Guttman pattern in the data.

Replications were thought to be unnecessary.

Heterogeneous objects with an inbuilt randomness of +/-12 are those, in this second type of data, which look most like homogeneous objects. As the size of the random adjustment decreases towards +/-2, the data behave more like the heterogeneous data that their true, unadjusted values represent. By the time the adjustment is only +/-2, the Rasch pairs analysis cannot cope with eight of the objects and only four objects have converged location parameters. The best sign of heterogeneous data is a failure of the program to produce results: much of the data are then in a Guttman pattern, and a Guttman pattern indicates non-locality of objects.

As the random element gradually decreases, the correlation of the location parameters with the true locations tends to increase: 0.730 -> 0.709 -> 0.777 -> 0.835 -> 0.873 (median values in Table 3). At the same time, the range of location parameters markedly increases: 2.56 -> 2.94 -> 3.70 -> 4.32 -> 5.50 (median values in Table 3). Thus an indicator of the homogeneity of objects is a compressed range of location parameters: the Rasch analysis assesses that the objects are relatively homogeneous and compresses the scale for them. In this way the Rasch analysis is perhaps acting rather like the metric of physical space near a concentration of mass. The average error in the location parameter, as calculated by BIGSTEPS, gradually rises from 0.50 to 0.66 as the data become more heterogeneous and the location parameters increase in size; the standard deviation of the object errors rises from 0.03 to 0.11 at the same time.



This paper shows that a Rasch analysis compresses its location parameter space according to the level of uncertainty in making judgements within that space.  The more uncertain the judgements, the more compressed are the points on the scale.








20 May 2011

(Version 2.0, 20 May 2011)

(Version 3.0, 30 August 2014)

(Version 3.1, 20 September 2014)



