An ordered logit model is given.
Consider the following log-likelihood function of the logit model:
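$$
\ell(\beta, \alpha) \;=\; \sum_{i=1}^{n} \sum_{k=1}^{4} \mathbf{1}\{y_i = k\}\,
\log\!\big[\Lambda(\alpha_k - x_i \beta) - \Lambda(\alpha_{k-1} - x_i \beta)\big],
\qquad \Lambda(z) = \frac{1}{1 + e^{-z}}
$$

(this is the standard ordered-logit form; $\Lambda$ is the logistic CDF and the $\alpha_k$ are the category cutpoints).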

Also given is that k = {0, 1, 2, 3, 4}, where for k = 0 and k = 4 we have alpha_0 = -Inf and alpha_4 = +Inf respectively. My question is: how can I code a function that optimizes this log-likelihood, i.e. finds the MLE of beta = beta_1 and of alpha_k for k = {1, 2, 3}?
Here is my attempt:
J <- length(Poss)
log_likelihood <- function(params, Poss = Poss) {
  beta  <- params[1]
  alpha <- c(-Inf, params[2:4], +Inf)
  for (k in 2:4) {
    for (j in J) {
      delta[k] <- alpha[k] - Poss[j] * beta
      ll <- delta[k] - log(1 + exp(delta[k]) - delta[k] + log(1 + exp(delta[k])))
      return(ll)
    }
  }
  return(sum(ll))
}
guess_optim <- c(0,0,0,0) #for beta1 and alpha1,2,3
optim(guess_optim, log_likelihood, Poss = Poss)
However, this does not work. How can I fix it?

I think the very first issue is that you return(ll) inside the inner loop, which means the function exits with the value for the very first record -- your procedure never iterates far enough to reach return(sum(ll)). The second issue is that an observation should only contribute to the likelihood for the category it is actually in, and you don't make that distinction. Finally, optim minimizes, so you should return the negative of the value you want to maximize. To test the implementation, let's obtain some example data first (from here):
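As a stand-in for that data, here is a sketch that simulates observations from an ordered logit with known parameters (the variable names Poss and Y, and the true values of beta and the cutpoints, are made up for illustration) and fits MASS::polr as a reference:

library(MASS)

set.seed(123)
n     <- 1000
beta  <- 1.5                  # true slope
alpha <- c(-1, 0.5, 2)        # true cutpoints alpha_1 < alpha_2 < alpha_3
Poss  <- rnorm(n)             # single numeric predictor
eta   <- Poss * beta

# P(Y <= k) = plogis(alpha_k - eta); draw the category from the cumulative probabilities
u <- runif(n)
Y <- 1 + (u > plogis(alpha[1] - eta)) +
         (u > plogis(alpha[2] - eta)) +
         (u > plogis(alpha[3] - eta))

# reference fit: polr's coefficient and zeta correspond to beta and alpha_1..alpha_3
ref_fit <- polr(factor(Y) ~ Poss, method = "logistic")
coef(ref_fit)
ref_fit$zeta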
Now, here's an implementation that will get you roughly the same parameter estimates:
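This is only a sketch rather than a complete fitting routine (it assumes the response Y is coded 1, ..., 4 and a single numeric predictor Poss, as above):

# negative log-likelihood: observation i in category Y[i] = k contributes
# log[ Lambda(alpha_k - eta_i) - Lambda(alpha_{k-1} - eta_i) ]
neg_log_likelihood <- function(params, X, Y) {
  beta  <- params[1]
  alpha <- c(-Inf, params[2:4], Inf)   # alpha_0 = -Inf, alpha_4 = +Inf
  eta   <- X * beta
  p     <- plogis(alpha[Y + 1] - eta) - plogis(alpha[Y] - eta)
  # guard against numerically non-positive p if the search tries crossed cutpoints
  -sum(log(pmax(p, .Machine$double.eps)))   # negative, because optim() minimizes
}

# starting values: the cutpoints must be strictly increasing, otherwise the
# middle categories start out with probability zero
start <- c(0, -1, 0, 1)   # beta1, alpha1, alpha2, alpha3
fit   <- optim(start, neg_log_likelihood, X = Poss, Y = Y)
fit$par

fit$par should come out close to c(coef(ref_fit), ref_fit$zeta) from the reference fit above (and, for the simulated data, close to the true parameter values).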
Some notes:
This implementation relies on Y being numeric and sequential starting from 1, and it only handles a single numeric X predictor; a proper fitting routine should allow much more flexibility (e.g. converting the levels of Y to this ordered sequence internally), along the lines of the sketch below.
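For example, a hypothetical wrapper (the name fit_ologit and the details below are just illustrative, not a tested routine) could map the response to 1, ..., K and accept several predictors via a model matrix:

fit_ologit <- function(formula, data) {
  mf <- model.frame(formula, data)
  Y  <- as.integer(factor(model.response(mf)))          # map response levels to 1..K
  K  <- max(Y)
  X  <- model.matrix(formula, mf)[, -1, drop = FALSE]   # drop the intercept; the cutpoints play that role

  nll <- function(params) {
    beta  <- params[seq_len(ncol(X))]
    alpha <- c(-Inf, params[-seq_len(ncol(X))], Inf)
    eta   <- drop(X %*% beta)
    p     <- plogis(alpha[Y + 1] - eta) - plogis(alpha[Y] - eta)
    -sum(log(pmax(p, .Machine$double.eps)))
  }

  start <- c(rep(0, ncol(X)), seq_len(K - 1) - K / 2)    # increasing starting cutpoints
  optim(start, nll)
}

# e.g. fit_ologit(factor(Y) ~ Poss, data = data.frame(Y = Y, Poss = Poss))$par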