Garmin Edge Touring can't do car, walking, or straight-line routing directions?

I previously owned an Edge 605, and recently replaced it with an Edge Touring. One key feature I can’t figure out on the Edge Touring is Car or Pedestrian routing. On the 605 you could choose “Car/Motorcycle,” “Cycling,” or “Pedestrian” routing. The Edge Touring, however, only seems to offer “Cycling,” “Tour Cycling,” or “Mountain Biking.”

This is a serious issue for me because I use the device in mixed environments away from home, for example a trip on which I walk, cycle, and drive. Whereas the best road-cycling directions on the Edge 605 were actually given by the Car mode with Highways disabled, I can’t figure out how to get decent driving directions on the Edge Touring. Is it even possible? If it’s not possible on the Edge Touring, what about on the Edge 800, 810, or 1000? This information doesn’t appear to be listed on Garmin’s own site, nor can I discern it from the various product reviews online.

Similarly, I miss the “Straight Line” style of navigation from the Edge 605: sometimes there is no known route, but you still want the distance-to-destination and direction pointer to work. I have also failed to figure out how to do this on the Edge Touring.

If it helps, I am using the City Navigator maps for Southeast Asia; the map data is all there, it is just the routing capabilities that seem lacking on the Touring model. I also used some OpenStreetMap files in Asia, which worked OK, but again the device only offers bicycle routing, whereas the old models offered car routing with the same maps.

Shell pattern matching and arithmetic operators (+ - * / %)

This looks so simple but it isn’t:

[[ "1234+5678" =~ [0-9]+(\s*(-|\*)\s*[0-9]+)* ]] && echo $?

returns a 0. However, it actually should not do that, as only the minus (-) and multiplication (*) operators are allowed.
Also, I tried to match this pattern in a regex tool I grabbed on the net: the result was the null string (as expected).

In prose, this extended regex reads:

  • look for a number (mandatory)
  • check if there is some white space
  • the operator must be either a - or a *
  • check if there is some white space (again)
  • look for another number (which must be there if preceded by an operator)

Also, the asterisk following the parenthesized expression says that the 2nd through nth operator-number pairs are optional.
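For anyone reproducing this, the behaviour can be demonstrated directly in bash. Note two things about `=~`: it uses POSIX ERE, which has no `\s` class (so `[[:space:]]` stands in for it below), and an unanchored pattern succeeds if it matches any substring:

```shell
# Unanchored, "1234" alone satisfies [0-9]+ with the operator group
# matching zero times, so the test succeeds on "1234+5678".
pat='[0-9]+([[:space:]]*(-|\*)[[:space:]]*[0-9]+)*'

[[ "1234+5678" =~ $pat ]] && echo "unanchored: match"

# Anchoring forces the whole string to fit the pattern:
[[ "1234+5678" =~ ^$pat$ ]] || echo "anchored: no match"
[[ "12 - 34 * 5" =~ ^$pat$ ]] && echo "anchored: match"
```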

Where is my mistake in thinking here?

Hyperlinks returning 404s

I’m very new to Cognito Forms and I’m setting up my first form. I want to create a link to . To do so, I created a Content field and used the Insert/Edit Link button. These are the settings for it:


After all that was done, I went to a live version of the form and clicked the link to make sure it was working. However, when clicking the link, it redirects me to instead of just . How do I remove the Cognito Forms portion of the URL? Is that possible?

Cohen's kappa for individual variables

I have a dataset in which two different raters assigned exactly one of four possible categories to each data point. I’m familiar with how to compute Cohen’s kappa for the entire dataset; however, it was suggested to me that I compute inter-rater reliability for each possible category.

Is this possible? If so, how do you calculate it either a) mathematically, or b) via JMP?
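For illustration, overall kappa is straightforward to compute by hand, and one common reading of “kappa per category” is a one-vs-rest collapse. Here is a small Python sketch with made-up ratings (the one-vs-rest reading is my assumption, not necessarily what was suggested):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' parallel label lists."""
    n = len(r1)
    # observed agreement
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # chance agreement from each rater's marginal frequencies
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n**2
    return (p_o - p_e) / (1 - p_e)

# hypothetical ratings, 8 points, categories A-D
rater1 = ["A", "A", "B", "C", "D", "B", "A", "C"]
rater2 = ["A", "B", "B", "C", "D", "B", "A", "D"]
print(round(cohen_kappa(rater1, rater2), 3))

# one-vs-rest: collapse to "is category k" vs "is not k", per category
for k in "ABCD":
    b1 = [x == k for x in rater1]
    b2 = [x == k for x in rater2]
    print(k, round(cohen_kappa(b1, b2), 3))
```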

What's an intuitive way to explain the different types of validity?

Specifically I’m thinking of a simplified division whereby validity is divided into:

  1. Construct validity
    1a. Convergent validity
    1b. Discriminant validity

  2. Criterion-related validity
    2a. Predictive validity
    2b. Concurrent validity

This division leaves out some common concepts (e.g., face validity and other types of criterion-related validity), but it’s for undergraduates taking their first course in statistics, and I’m required to teach using this division.

I’m looking for examples, mnemonics, diagrams, and anything else that might help me explain the division in a memorable and intuitive way.

One thing I’m particularly struggling with is a clear way to explain the difference between concurrent validity and convergent validity, which in my experience are concepts that students often mix up.

Significance testing for feature selection

Feature selection: is there a statistical method to identify whether or not a feature is relevant in a classification problem?

I’m reading about filter models, but as per my understanding, performance criteria like the Fisher score or Information Gain can only be used to rank-order the features.

But I’m more interested in defining a threshold using some criterion for feature selection (something like a p-value), so that the method can be implemented agnostic of the underlying data.

P.S.: I have continuous, ordinal, and nominal features.
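One data-agnostic way to get a p-value-style threshold is a permutation test on any per-feature statistic. A minimal Python sketch, assuming a numeric feature and binary 0/1 labels (toy data, illustrative names):

```python
import random

def perm_pvalue(feature, labels, n_perm=999, seed=0):
    """Permutation p-value for the association between one numeric
    feature and a binary (0/1) class label.  Test statistic: absolute
    difference of class means; the null is built by shuffling labels."""
    rng = random.Random(seed)

    def stat(ys):
        g0 = [f for f, y in zip(feature, ys) if y == 0]
        g1 = [f for f, y in zip(feature, ys) if y == 1]
        return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

    observed = stat(labels)
    perm = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if stat(perm) >= observed:
            hits += 1
    # add-one smoothing keeps the p-value strictly positive
    return (hits + 1) / (n_perm + 1)

labels   = [0] * 10 + [1] * 10
relevant = [0.1 * i for i in range(10)] + [5 + 0.1 * i for i in range(10)]
noise    = [i % 2 for i in range(20)]   # unrelated to the labels

print(perm_pvalue(relevant, labels))    # small p: keep the feature
print(perm_pvalue(noise, labels))       # large p: drop it at alpha = 0.05
```

The statistic can be swapped for anything (Fisher score, a chi-square statistic for nominal features, a rank correlation for ordinal ones) while the thresholding logic stays the same.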

How can a random vector be nonsingular?

I appreciate the help I have been getting on this site today. I had help on a proof that $\text{Cov}\left(\mathbf{Y}\right)$ is nonnegative definite for any random vector $\mathbf{Y}$. According to the textbook I have, it follows from this statement that $\mathbf{Y}$ is nonsingular if and only if $\text{Cov}\left(\mathbf{Y}\right)$ is positive definite.

Now this doesn’t make sense to me, since $\mathbf{Y} \in \mathbb{R}^n$, and I thought that only square matrices were nonsingular? Also, I know that $\text{Cov}\left(\mathbf{Y}\right)$ is positive definite if and only if there exists a nonsingular (square) matrix $Q$ such that $$\text{Cov}\left(\mathbf{Y}\right) = QQ^{\prime}\text{.}$$

So are they perhaps actually saying that $\text{Cov}\left(\mathbf{Y}\right) = \mathbf{Y}\mathbf{Y}^{\prime}$, where $\mathbf{Y}$ is a nonsingular square matrix rather than a vector?
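For reference, the direction from the factorization quoted above to positive definiteness is immediate:

```latex
% If Cov(Y) = QQ' with Q nonsingular, then for any nonzero x:
x^{\prime}\,\text{Cov}\left(\mathbf{Y}\right) x
  = x^{\prime} Q Q^{\prime} x
  = \left\lVert Q^{\prime} x \right\rVert^{2} > 0,
% since Q'x is nonzero whenever Q is nonsingular and x is nonzero.
```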

Gradient for hinge loss multiclass

I am a little confused when trying to find the gradient of the multiclass hinge loss:

$l(y) = \max\left( 0,\; 1 + \underset{r \neq y_i}{\text{max}}\; W_r \cdot x_i - W_{y_i} \cdot x_i\right)$

where $W^{k \times n}$ is the matrix holding in each row the corresponding classifier of each class.

Unless my math is wrong, the gradient of the function is:
$$\frac{\partial l}{\partial w} = \begin{cases}
0, & W_{y_i}\cdot x_i > 1 + W_r \cdot x_i \\
-x_i, & \text{otherwise}
\end{cases}$$

Is this ok?

So if I would like to find the $W$ which minimizes the function using stochastic gradient descent, I would need to do:
$$W_y^{(t+1)} = W_y^{(t)} - \eta\, x_i$$

with $\eta$ the learning rate.

Is this a valid procedure to optimize the $l(y)$ function?
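To sanity-check the update, here is a minimal NumPy sketch of one step (names are illustrative). Note that the maximizing row $W_r$ also receives a $+x_i$ term in the subgradient, so a full step moves two rows, not only $W_{y_i}$:

```python
import numpy as np

def hinge_sgd_step(W, x, y, eta):
    """One SGD step for l = max(0, 1 + max_{r != y} W_r.x - W_y.x).
    When the margin is violated, the subgradient is -x for row y
    and +x for the most-violating row r; otherwise it is zero."""
    scores = W @ x
    others = scores.copy()
    others[y] = -np.inf
    r = int(np.argmax(others))            # most-violating class
    if 1 + scores[r] - scores[y] > 0:     # inside the margin: update
        W = W.copy()
        W[y] = W[y] + eta * x             # pull correct class up
        W[r] = W[r] - eta * x             # push violator down
    return W
```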

Question in conditional probability

Let $X$ and $Y$ be two independent random variables of discrete type with p.m.f.s $f_1(x)=\,_{n_1}\!C_x\,p^x(1-p)^{n_1-x}$, $x=0,1,\ldots,n_1$, and $f_2(y)=\,_{n_2}\!C_y\,p^y(1-p)^{n_2-y}$, $y=0,1,\ldots,n_2$, respectively, where $n_1,\: n_2$ are positive integers.

(a) Find the distribution of $Z = X + Y$. (I got this.)

(b) For $z \leq \min\{n_1, n_2\}$, find the conditional distribution of $X$ given $Z=z$. (I’m stuck on this!)

I have difficulty finding the probability of $Z=z$ since $z$ is not a specific value but is only bounded by the minimum of two numbers. I also have no idea how to find the probability of $X \cap Z$, which is needed for the conditional probability, for the same reason as above.

[Note: $_n\!C_r \equiv {{n} \choose {r}}$]
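For context, the result for part (a) (which the poster already has) comes from convolving the two p.m.f.s and applying the Vandermonde identity:

```latex
P(Z=z) = \sum_{x=0}^{z} \,_{n_1}\!C_x\, p^x (1-p)^{n_1-x}
         \;\,_{n_2}\!C_{z-x}\, p^{z-x} (1-p)^{n_2-(z-x)}
       = \,_{n_1+n_2}\!C_z\; p^z (1-p)^{n_1+n_2-z},
```

i.e. $Z \sim \text{Binomial}(n_1+n_2,\, p)$.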