
Archive for February, 2011

I am honored

Today I received an email from somebody asking me to write a book chapter.

I am contacting you regarding a new InTech book project under the working title “Milk Production”…

You are invited to participate in this book project based on your paper “Use of individual cow milk recording data at the start of lactation to predict the calving to conception interval”, your publishing history and the quality of your research. However, we are not asking you to republish your work, but we would like you to prepare a new paper on one of the topics this book project covers.

Publication of the book is scheduled for 27 October, 2011. It will be abstracted and indexed in major databases and search engines. The book will also be available online and you will receive a hard copy via express delivery service.

Why should you participate?
– “Milk Production” covers your area of research
– Free online availability increases your paper’s impact
– Each InTech book chapter is downloaded approximately 1000 times per month
– More citations of your work
– You keep the copyright to your work

I have become used to being asked to write book chapters! So what's the catch this time? Maybe this:

A publishing fee of 590 euro covers the cost of the scientific publishing work flow, print and distribution of one hard copy of the book for the corresponding author, server and repository costs.

Out of interest, I have checked the table of contents of Stochastic Control and counted 30 chapters and over 60 authors. It probably does not matter whether the book is ever read as long as you find enough people to write it.


WinBUGS is renowned for its cryptic error messages. Recently, while trying to fit a logistic model, I kept getting the message ‘Trap 66 (postcondition violated)’. There is not much on the web about its possible causes. There are, however, a few mentions, such as on page 10 of this document and on page 251 of this document. In both cases, variance priors are identified as the culprits. Indeed, in my case, changing the prior on the random-effects standard deviation from dunif(0, 100) to dunif(0, 10) solved the problem.
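For concreteness, the change amounts to narrowing the uniform prior in the model file. A minimal sketch, where `sigma` is an illustrative name for the random-effects standard deviation, not necessarily the name in my model:

```
# sigma ~ dunif(0, 100)     # original wide prior: triggered Trap 66
sigma ~ dunif(0, 10)        # narrower support: sampling proceeded normally
tau <- 1 / (sigma * sigma)  # WinBUGS parameterizes the normal by precision
```

The narrower prior keeps the sampler away from extreme standard-deviation values, which seems to be what trips the trap.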

PS: This post seems to attract some traffic probably because it is a common problem with no clearly identified cause. It might help if people who have encountered this problem could leave a short description and how they solved it as a comment.


A recent article by Heringstad and Larsgard looks at fertility in two selection groups in Norway. In one group, cows were mated to the best sires for protein yield, while in the other group sires were chosen based on their breeding value for mastitis. Fertility, as measured by 56-day non-return rate and calving interval, was clearly better in the group selected for lower mastitis. The authors attribute the poorer performance of the protein yield selection group to the correlation of that trait with milk yield. In light of my previous post on protein yield, this would appear to make sense.


With our work on milk constituents and reproduction, we found that the fat-to-protein ratio might not be a very good predictor of reproduction after all, and that protein percentage would be better, although I am not sure how good it would actually be on its own. I have been contacted by several people asking about protein yield. I have written what I think here.

At the same time, I have looked a bit more into this in terms of the distribution of milk yield and protein percentage around the lactation peak. What I did was select all milk recordings between 50 and 60 days in milk from multiparous cows in my national database. This was to limit variation due to lactation stage. Primiparous cows are clearly different in terms of milk production and were not included. I had 205,000 recordings available. I then created a grid of milk yields between 15 and 55 kg and protein percentages between 2.5 and 3.75 %, and counted the number of recordings in each cell. For example, one cell held the number of cows producing between 15 and 15.5 kg of milk with a protein percentage between 2.5 and 2.6. The data were smoothed with loess as explained previously. This is shown in the figure below.
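The gridding step can be sketched in R along these lines. The data frame, its column names, and the simulated values are illustrative, not the actual database fields:

```r
# Stand-in for the recordings: one row per milk recording, with milk yield
# in kg and protein in % (names and values are made up for illustration)
set.seed(1)
rec <- data.frame(milk    = runif(1000, 15, 55),
                  protein = runif(1000, 2.5, 3.75))

# 0.5 kg milk bins and 0.1 % protein bins, as in the text
milkBins <- cut(rec$milk,    breaks = seq(15, 55, by = 0.5))
protBins <- cut(rec$protein, breaks = seq(2.5, 3.8, by = 0.1))

# Count of recordings in each milk-yield / protein-percentage cell
grid <- table(milkBins, protBins)
```

The resulting table of counts is what gets smoothed and shaded in the figure.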

The darker the shade of grey, the higher the number of cows. The purple lines represent protein yield. The most common milk yield / protein percentage combination was a production of around 40 kg with 3.00 % protein. There was no variation in the minimum protein percentage with milk yield, but the distribution of protein percentage narrowed as milk yield increased. Maybe the most interesting point in all this is how much the weight of protein produced depends on milk yield, and how much less on the protein percentage. One way to see this is to take, say, a production of 40 kg of milk: between 2.5 and 3.75 % protein, the difference in protein yield is 0.5 kg. Now, if you take milk with 3 % protein, the difference in protein yield between 20 and 50 kg of milk is 0.9 kg. So looking at protein yield gives much more weight to milk yield than to the protein percentage.
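The two comparisons above are just arithmetic on protein yield = milk yield × protein % / 100. A quick check in R (the helper name is mine, not from the analysis):

```r
# Protein yield in kg: milk yield (kg) times protein percentage
protein_yield <- function(milk_kg, protein_pct) milk_kg * protein_pct / 100

# Fixed milk yield of 40 kg, protein ranging over 2.5-3.75 %
protein_yield(40, 3.75) - protein_yield(40, 2.5)  # 0.5 kg

# Fixed protein of 3 %, milk yield ranging over 20-50 kg
protein_yield(50, 3) - protein_yield(20, 3)       # 0.9 kg
```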


In order to uncover relationships between variables without having to resort to complicated models, it can be interesting to smooth your data. Several techniques are available, from very simple moving averages to more complicated generalized additive models. When I have to do this, I have come to like local regression as implemented in the R loess function.

To illustrate how it works, I generate some data from a nonlinear function for the mean plus normally distributed random noise.

set.seed(824)

# Nonlinear mean function plus normal noise
f1 <- function(x) {
  x - 2 * x^2 + 1.5 * x^3 + rnorm(1, mean = 0, sd = 0.03)
}

myDat <- data.frame(x = seq(0, 1, by = 0.01))

# sapply calls f1 once per x, so each point gets its own noise draw
myDat$y <- sapply(myDat$x, f1)

The data are shown in the figure below:

Using loess is really simple. The syntax is the same as for other R models. The degree of smoothness is controlled by the span argument of the function. By calling predict either on the original data or on a vector (or grid) of new values, you obtain a smoothed curve. The following code runs loess on our data: four values of span are tested and the smoothed curves are plotted over the original data.

limy <- c(0, 0.6)
spns <- c(0.05, 0.1, 0.75, 2)

par(mfrow = c(2, 2))

# Loop over the four span values
for (i in 1:4) {

  # Local regression with the current span
  myMod <- loess(y ~ x, data = myDat, span = spns[i])

  # Predict the smoothed response at the original x values
  # (predict.loess takes newdata, not data)
  myDat$pred <- predict(myMod, newdata = myDat)

  # Plot the raw data with the smoothed curve overlaid
  plot(y ~ x, data = myDat, pch = 20,
       main = paste("Span =", spns[i]),
       ylim = limy)
  lines(pred ~ x, data = myDat, col = "red", lwd = 2)
}

The figure below shows the different degrees of smoothness obtained for different values of span.
