EDITED to clarify:
In terms of performance (though that's still a wild term, I know), which is better - loading a local version, or a CDN version of jQuery, over RequireJS?
For the record, the RequireJS online documentation contains a passage that seems to discourage using a CDN, though I'm not 100% sure what it means:
Do not mix CDN loading with shim config in a build. Example scenario:
you load jQuery from the CDN but use the shim config to load something
like the stock version of Backbone that depends on jQuery. When you do
the build, be sure to inline jQuery in the built file and do not load
it from the CDN. Otherwise, Backbone will be inlined in the built file
and it will execute before the CDN-loaded jQuery will load. This is
because the shim config just delays loading of the files until
dependencies are loaded, but does not do any auto-wrapping of define.
After a build, the dependencies are already inlined, the shim config
cannot delay execution of the non-define()'d code until later.
define()'d modules do work with CDN loaded code after a build because
they properly wrap their source in define factory function that will
not execute until dependencies are loaded. So the lesson: shim config
is a stop-gap measure for non-modular code, legacy code.
define()'d modules are better.
Theoretically, using a CDN-hosted jQuery file results in one more HTTP request (it can't be merged with your other JS files by r.js), but it has the potential benefit that your visitors may already have the CDN version cached from other sites they've visited.
However, if I understand the information I've googled correctly, you still need to provide a local jQuery copy to r.js, because the resulting minified JS file still needs to contain a copy of the jQuery module to keep the dependencies consistent. That would mean loading jQuery both locally and from the CDN. (I hope I got this part right?)
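For illustration, the two build setups I'm comparing look roughly like this (file names and paths are just examples, not from a real project):

```javascript
// build.js -- an r.js build profile (hypothetical paths for illustration)
({
    baseUrl: "js",
    name: "main",
    out: "main-built.js",
    paths: {
        // Option A: point r.js at a local copy, so jQuery is inlined
        // into main-built.js along with everything else.
        jquery: "lib/jquery"

        // Option B: exclude jQuery from the build and load it from the
        // CDN at runtime instead (the runtime config then maps "jquery"
        // to the CDN URL):
        // jquery: "empty:"
    }
})
```

`empty:` is RequireJS's documented way of telling the optimizer to skip a module during the build while still resolving it at runtime.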
So, which way is better?
The requirejs doc passage you quote is specifically about scripts that have a shim config depending on jQuery. Dynamically loading a base dependency from a third-party CDN is fine if all the scripts are AMD modules.
Cache hits are not as high as you might think (I believe Yahoo did a study on cached vs. non-cached state), and it means you now have to rely on another domain for loading.
The benefits probably depend on the app; profiling it will lead to the best answer. For instance, if it is a site with lots of images, the jQuery loading strategy matters less, as image loading will probably be the more noticeable performance issue.
I would start out with optimizing jQuery into the built file and using AMD modules for everything, so if I want to delegate to the CDN I can. However, if using requirejs and the shim config, the base dependencies need to be inlined in the built file because the shimmed libraries do not call define() -- they do not wait for dependencies to load, they want them available immediately.
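As a sketch of what "all AMD modules" means in practice (the CDN URL and module names here are just examples):

```javascript
// Runtime config: jQuery comes from a CDN, everything else is local/built.
// Note that RequireJS paths omit the trailing ".js".
requirejs.config({
    paths: {
        jquery: "https://code.jquery.com/jquery-3.7.1.min"
    }
});

// A define()'d module: the factory function below does not execute until
// jQuery has loaded, so no shim config is needed -- this is why AMD modules
// are safe to combine with CDN-loaded dependencies after a build.
define(["jquery"], function ($) {
    return {
        highlight: function (selector) {
            $(selector).addClass("highlight");
        }
    };
});
```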
Short answer: Avoid the extra HTTP request and DNS lookup
You're most likely better off using your own copy and letting RequireJS merge the files. In other words, I'd say it's more valuable to avoid that extra http request and DNS lookup.
While it's true that a user may already have that file in their cache from another site, they most likely will not. Even if they had been to another site recently, cache sizes are generally small enough that during the course of a normal browsing session or two, a user can easily fill up their cache, in which case older files will be discarded.
I think you'd only really be talking about 1% of your traffic, at most, that has the CDN file in cache already, so only 1% of your users are benefiting. However, by combining those resources and avoiding the extra HTTP request, you're benefiting 99% of your users. So conversely, you'd be hurting 99% of your users by not combining. Just another way of looking at this.
Another consideration is mobile users: mobile networks have terrible latency, so the RTT for the additional HTTP request and DNS lookup carries a larger cost.
It is not only the fact that people may have cached the file. User agents can only load a couple of files from the same domain at the same time, so loading the JS file from a CDN means the file gets downloaded in parallel with your other assets.
This comes on top of the benefit of users already having a cached version of the file. So for popular files (e.g. the jQuery library) I would always load from a CDN.
You could always add a fallback to a local version in case the CDN is down for whatever reason.
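RequireJS supports exactly this through fallback paths: if a `paths` entry is an array, it tries each location in order. A minimal sketch (the CDN URL is just an example):

```javascript
// If the CDN request fails, RequireJS falls back to the next array entry.
requirejs.config({
    paths: {
        jquery: [
            "https://code.jquery.com/jquery-3.7.1.min", // try the CDN first
            "lib/jquery"                                // local fallback copy
        ]
    }
});
```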
Note
Although the RFC states that user agents should make a maximum of 2 simultaneous requests per host, most user agents ignore that part of the spec nowadays. Also see this old (2009) question on SO. It wouldn't surprise me if user agents currently make even more requests.