I wish they would dedicate more time to improving cl.exe and link.exe in terms of performance and language support, and less time to these hokey team-visualization gimmicks.
Every release since 2008 has been getting slower and slower for C++, the C compiler is awful, PGO instrumentation in Win8 is less capable, and there are no equivalents in the Windows ecosystem to gcc's likely/unlikely, oprofile, or valgrind.
It's at the point now where developing on Windows vs Linux is a serious performance and security impediment due to their withering native toolchain. But I guess nobody over there gets promoted for fixing the hard stuff.
Current VS compiler dev here - the backend codegen team, to be more precise (I actually own PGO). I wish this comment had been written a few days/weeks/months from now so I could answer directly and talk specifics about some of the work that went into VS2013, but for now I just want you to know we're aware of all of the issues you brought up, and have either worked on or plan to work on many of them.
RE: likely/unlikely, VS has __assume(0), which isn't exactly the same thing I know, but it is something and does help. I'm actually in favor of us doing more with static annotations to bring PGO style optimizations to non-PGO builds. If you feel the same way please be louder about it, but realize there is a vocal group of people who consider static annotations harmful (and they have a large body of evidence in __forceinline backing them up).
oprofile: There is ETW/xperf, and of course a variety of instrumented profilers (both shipping and internal)
Although I do wish my team were larger, and it doesn't get all the love that some of the flashier UI stuff does, I wouldn't go so far as to say the toolchain is withering. Some of the smartest people I know are working with me on my team on these problems.
I work on an extremely latency sensitive application. The choice of Windows predates me, and frankly, it was a massive mistake. I am never working on Windows again after this job.
Here's some feedback for low latency development in Visual Studio. Try not to take it personally, I don't hate you, I just hate every MSVC I've ever used:
- With a profiler, we typically see only a few hundred samples in our simulation runs; the rest (between 99.999% and 99.99999% of the samples) are in WaitForSingleObject. PGO compiles only 0.4% of our application for speed, and our response times are about 20 usecs slower with it on.
- RE xperf (and WinDbg): Stop bundling this shit in "Toolkits". The installers/downloaders are buggy as fuck and break the main VS2012 installer; I don't want to run a bunch of random msi files on our prod server core box; and the download pages are a maze of redirects.
- __assume is so useless. How often does someone write a branch that does nothing every time? We need manual size/speed optimization.
- PogoAutoSweep crashes threaded programs if you don't suspend every other thread, and this is only quasi-documented. The PogoSafeMode build flag/environment variable appears to be ignored.
- The filename postfix that PogoAutoSweep adds breaks the VS2012 PGO menu options.
- The VS2012 PGO instrumented/optimized menu items overwrite the target exe. So when you realize something's wrong in the environment, or you click something by mistake and didn't manually reshuffle the build dir, you have to rebuild everything. Name the target .instrument.exe or put it in another directory or something, please.
- There's nothing one can do to limit the VS2012 profiler to specific threads. I was able to write hooks to target threads in VerySleepy in an afternoon, but somehow this feature escapes MS.
- The interface for instrumenting specific functions is terrible; use a plain text file or a __declspec, FFS.
- If there are #defines or other ways to detect an instrumented build, they're terribly documented.
- PGO instrumentation/optimization is woefully obtuse. What did it pick for speed? Why did it pick it? What branches did it fold/unfold? How does the pgc weighting actually work? Can I artificially create my own pgc?
A perl script that compares the offsets in objdump will give me more information than most of the MSDN articles about this shit.
- Not related to our main response loop, but we can see in our logging threads that the LFH malloc appears to often call RtlAnsiStringToUnicodeString. Seriously, what the fuck?
- Speaking of which, changing the malloc implementation is still horrible even after the VS2010 msvcrt changes. In Linux, you can change LD_PRELOAD and try out tcmalloc or the Intel TBB allocator in about 3 minutes. In Visual Studio, prepare to spend a few hours getting a reasonably large project to build with these.
- Why is there SemaphoreSlim in C# but not C++? Why is there no Benaphore primitive that can also be used in WaitForMultipleObjects?
- Serious issues in Microsoft Developer Connect are often ignored, closed as "behaves as expected", or dismissed offhand. For example, I was tearing my hair out over this one, and the resolution is truly outrageous: http://connect.microsoft.com/VisualStudio/feedback/details/7...
I am certain that this comment on Hacker News will make a bigger impact than anything I've ever seen on Microsoft Connect.
- RE Instrumenting profilers/bounds checkers: Any project that is reasonably large and has multiple configs/3rd-party libraries is hard enough to manage in vanilla Visual Studio that instrumenting it with some other 3rd-party plugin becomes a serious time sink.
- There is still no valgrind/cachegrind equivalent that provides the same level of detail. The closest thing is either Intel Pin or Rational Purify/Quantify and they are expensive and poor substitutes. Microsoft is the only company that can see and modify the source of the kernel, runtime, linker, and machine code generation, so I don't know who else they expect to write this for them.
- Our statically linked application takes 20 minutes to link, and the link is not parallel. C++ compiles are likewise brutally slow. We resort to developing in VS2008 and compiling release stuff in VS2012. And no, I'm not going to turn on precompiled headers; MSVC builds incorrect binaries about 5% of the time as it is.
- Concerning precompiled headers, sharing a single pch file across projects or strictly controlling a single vcproj/vcxproj with the compiled unit is de facto impossible.
On a related topic - may I seriously ask how we as a community of technologists can help change the tone directed at our brothers and sisters in code? It's no wonder more people from Microsoft don't bother engaging when the first (so far only) reply is a hostile tirade about a random collection of complaints that have nothing to do with the person being replied to.
While you make a valid point about the tone (this specific area has wasted the last 2 weeks of my life), I think you're being disingenuous with the "random collection of complaints that have nothing to do with the person being replied to" comment.
This person works on PGO and code generation and wanted feedback. Everything I said related to PGO, instrumentation, or parts of the CRT that relate to that (and one about their broken feedback system).
I'll be perfectly honest, when I was working on the MS compiler team (for CE), I just sort of got used to it.
But yes, thick skin is required for those who self identify!
Then again, I have always held the view that if my users are unhappy, it is a personal failing on the part of my team and myself. (Although I am low enough on the software engineering totem pole that I can't really do anything beyond ensuring the components I create are as user-friendly and high-quality as possible!)
I also have this gripe: why does every version of Visual Studio change the CRT version so drastically, requiring a new DLL and breaking all plugins that happen to link against some other version?
This basically means that if you are using (say) Autodesk products, you have to compile all your plugins with exactly the same version the main product (exe) was compiled with.
This is an absolute mess. Talk about 7 different MSVCRxx.DLL and MSVCPxx.DLL files in one process.
Why can't you think of a scheme where backward compatibility works for the CRT/C++ runtime too?
Oh, and on naming things - please stick with VS2012 -> CRT2012 if possible.
Also, why are ShortName/PlatformName in Visual Studio named so inconsistently - amd64/x64 vs. Win32/x86 - and why do SDKs coming from Microsoft place things in the lib/ folder sometimes under lib/$(ShortName) and sometimes under lib/$(PlatformName)?
I know why - because these things are not important. Just hard-code it in your .vcxproj and live a happier life :) Automation begone!
> PGO compiles only 0.4% of our application for speed, and our response times are about 20 usecs slower with it on
This is a complex issue, but consider abandoning PGO and just compiling for speed then. PGO doesn't help in each and every case.
> RE xperf (and WinDbg): Stop bundling this shit in "Toolkits".
How else would you bundle it? It's not simple to put something as part of the base OS image. And I haven't heard about these installers breaking the VS installer - that sounds like a bad bug.
> __assume is so useless. How often does someone write a branch that does nothing every time?
It's commonly used as a retail version of a debug ASSERT macro. But yes, like I said earlier - I wish we would do more with static annotations, but I've gotten push back.
> PogoAutoSweep crashes threaded programs if you don't suspend every other thread but it's still quasi documented. The PogoSafeMode build flag/environment appears to be ignored.
I've never seen PogoAutoSweep crash - do you have a repro? PogoSafeMode doesn't affect PogoAutoSweep, only probe generation.
> The filename postfix that PogoAutoSweep adds breaks the VS2012 PGO menu options.
Haven't heard of this either, but stay tuned. I don't like the PGO menu options as they currently stand.
> There's nothing one can do to limit the VS2012 profiler to specific threads.
I can forward that request to the profiler team.
> The interface for instrumenting specific functions is terrible, use a plain text file or decl_spec FFS.
Are you talking about PGI or an instrumented profiler?
> If there are #defines or other ways to detect an instrumented build, they're terribly documented.
There isn't an easy way, and having different code in the PGI build versus the PGU build would be problematic.
> PGO instrumentation/optimization is woefully obtuse. What did it pick for speed? Why did it pick it? What branches did it fold/unfold?
Stay tuned
> How does the pgc weighting actually work?
The obvious way, the counts are multiplied by the provided factor before being merged in the PGD.
> Can I artificially create my own pgc?
Not realistically.
> Not related to our main response loop, but we can see in our logging threads that the LFH malloc appears to often call RtlAnsiStringToUnicodeString.
No idea (CRT owns malloc, Windows owns LFH).
> Speaking of which, changing the malloc implementation is still horrible even after the VS2010 msvcrt changes. In linux...
I'm not an expert, but my understanding was that malloc and friends are weak symbols, and if you just linked in an obj that defined malloc, it would be selected as the "real" malloc without triggering an ODR violation.
> Why is there SemaphoreSlim in C# but not C++? Why is there no Benaphore primitive that can also be used in WaitForMultipleObjects?
I'm not sure, Windows owns this.
> Serious issues in Microsoft Developer Connect are often ignored, closed as behaves as expected, or dismissed off hand
I've heard complaints about MSConnect before as well. All I can say is that it is the correct place to file bugs; and the issues there do directly show up in our bug list (someone goes through connect issues, filters/combines them, and files bugs).
> There is still no valgrind/cachegrind equivalent that provides the same level of detail
That is correct. Sorry.
> Our statically linked application takes 20 minutes to link and the link is not parallel. C++ compiles are likewise brutally slow
Link.exe performance is at the top of our minds right now, you're not the only one to bring it up. VS 2013 will have some perf improvements across the FE (to help with C++ being brutally slow) but there is always more to do.
> And no, I'm not going to turn on precompiled headers, MSVC builds incorrect binaries about 5% of the time as it is.
Never heard that before - codegen bugs are always deadly serious and treated with high priority. If you have a repro, please share it.
Despite having such massive resources, why does the Microsoft C++ compiler frontend suck so much when it comes to standards conformance? You are consistently last in that regard.
It is never an either-or. It is always a complex mix of what customers ask for and what the strategic priorities/market realities are.
Often the problem with these queries is that there are not enough devs complaining to MSFT. No PM/engg manager is going to ignore a bug/problem if it shows up high in customer requests.
On promotions - I think it's the reverse problem. People only get promoted for working on something that's perceived to be hard.
> Often the problem with these queries is that there are not enough devs complaining to MSFT. No PM/engg manager is going to ignore a bug/problem if it shows up high in customer requests.
Note that you switch between "devs" and "customer" there. I argue that the real problem is that for VS those demographics are different. Devs in "Microsoft shops" don't go out and decide on a compiler; they use VS because that's all there is. And the decision to purchase that compiler is made for them by the suited management class, who frankly don't care about C++11 support beyond what they see in a line-item checklist.
But you're right, that clearly the "devs" have preferences. The problem with your mechanism is that they don't express them to you in customer service requests. They just job-hop to another environment, doing Javascript work, or Linux, or Ruby. And you never hear from them.
Basically, the way feedback works in this world is with feet, not bug reports. If you build it they will come. If you don't they will leave.
I understood "Microsoft shops" as Windows ISVs; if he meant companies 100% committed to Microsoft tooling, then you are right and I made a hasty comment.
For every Windows ISV there are a hundred companies where the software development toolchain is chosen by someone with nothing more than a thin grasp of the concepts involved.
They read in magazines the new shiny IDE has better team support and integrates seamlessly with Exchange and the SharePoint intranet deployed last year.
But let's be realistic here. Most of those shops will never hire someone to write C++ code. In all likelihood, they are still porting VB3 apps to VB.net.
Or they get someone like us that moves code from C++ to Java and .NET, because C++ is legacy.
The last time I managed to do a full greenfield C++ project at work was around 2005.
The enterprise moved away from it a long time ago, and Microsoft's incoherent talk about going native does not help.
If they are serious about that, I would expect proper C++11 support and an NGEN improved to the point that I could use it as a real native-code compiler. Not dumb UI changes.
>But you're right, that clearly the "devs" have preferences. The problem with your mechanism is that they don't express them to you in customer service requests. They just job-hop to another environment, doing Javascript work, or Linux, or Ruby. And you never hear from them.
>Basically, the way feedback works in this world is with feet, not bug reports. If you build it they will come. If you don't they will leave.
Is there any quantifiable metric to measure this so-called exodus beyond anecdotes and the ".NET is dying" posts on here?
If there were an IDE for .NET on par with VS, it would definitely see uptake regardless of suits.
Well, just to address the first point, there is undeniably an "exodus" from Visual Studio. Just look at all the code shipped to run in browsers, or iOS, or Android, or on node or rails. A decade and a half ago a far (far!) greater fraction of that was spent in a Microsoft IDE.
But that's really not the point. The upstream discussion was much narrower, and focused specifically on C/C++ support, which frankly sucks in the windows world compared to the renaissance we're seeing in Unix with our dueling multi-architecture full-support C++11 implementations.
I'm not nearly expert enough to comment on how good or innovative the .NET support in VS is, but I'm perfectly willing to believe it's great.
Your reasoning is really faulty because you aren't accounting for overall growth in the industry. If the world's Irish population increases by 20%, does that mean the Asian population shrank? Of course not.
Almost all video games are produced in Visual Studio, more or less. Even the ones for Sony/Wii (as plugins/extensions in Visual Studio, etc., although using different compilers).
That accounts for a lot of software out there.
And no, I don't like Visual Studio, but it's the best tool out there for the majority of the people I work with.
For what it's worth, I've been using SharpDevelop almost full-time at work. Then again, I'm more of a Linux guy working at a .Net shop because "Hey, why not. C# is nice enough"
Mozilla has been fighting the MSVC linker's 3GB virtual address space limit for years. Mozilla has asked Microsoft for a 64-bit linker that can produce 32-bit code, but Microsoft apparently has no plans for such a configuration.
Disclosure: I'm a new guy at Microsoft, not in the VS team, just speaking from prior experience as a Win32 developer.
There are still internal limits in the PE32 format which mean parts of the output as a whole will be limited to 2 GiB or so, regardless of the address space available to the linker.
I too have hit internal linker limits from time to time, but in the great scheme of things, the span of computing history in which it makes sense to have a 64-bit linker building an output module with 32-bit internal limits to run on 64-bit processors seems vanishingly small. Just my personal observation.
In Mozilla's case the _output_ is nowhere close to 2GiB.
What hits the 32-bit limit is the link-time code generation, which has to have the entire program's AST, plus all the profiling information, plus whatever other data structures it's using in memory all at once.
Same with Qt5 and /LTCG - Visual Studio can't compile it (I've tried the 64-bit compiler targeting 64-bit, since there is a 32-bit compiler targeting 64-bit too).
After checking my logs again, it was failing while building QtWebKit (ahem - other posts explain it too).
For my custom Qt5 build, I've disabled /LTCG only for QtWebKit (32/64-bit, debug/release) and was able to work around this limitation (at some other cost, I guess).
What I want to know is how someone can see that their software project has become unlinkable on a 32-bit OS and somehow think that this is purely a compiler problem.
I'm actually about to sit down and write a blog post about VS compiler memory issues, and I had a long talk with the Firefox guys a few weeks ago about this issue. It's not all their fault.
It's not just Firefox; other large projects like Chromium also exceed this limit when using MSVC's profile-guided optimization (PGO) on 32-bit Windows. (Chrome's solution is to build without PGO; Firefox's is to limit PGO to only a portion of source directories.)
Keep in mind that this is the group that thinks de-prioritizing 64-bit Windows builds is a-okay while at the same time they need 64-bit Windows machines running 32-bit linkers to build their 32-bit Windows binaries. ...phew, what a mouthful.
Mozilla has never struck me as the most... forward looking.
btw, Google Chrome doesn't have a 64-bit version* for Windows. The Chrome team also disabled PGO on Windows because of the same MSVC linker limitations that Mozilla hit.
* Edit: oops, I corrected a typo where I wrote "32-bit build".
Yes, though that is far less of a pain in the ass since they use multiple processes. I can't count the number of times I have run out of memory with Firefox on Windows despite having far more than Firefox was using.
For one thing, I don't think their JIT produces code that properly follows the Windows x64 ABI yet. It really should be disabled in 64-bit builds until they do.
Following the ABI is only an issue at module boundaries. If you control every callsite of a function, you can invent whatever calling convention you want.
Their stated reason for de-prioritizing 64-bit builds wasn't technical; rather they say they don't have the manpower to do both. Perhaps the manpower issue is that they don't have enough people to work on the JIT issues on x64 and do everything else, but I do not believe they specified.
I am a bit confused. That thread seems to claim that Microsoft isn't producing a 64 bit linker, my install seems to disagree with them (vc\bin\amd64). Then you claim that the linker won't produce 32 bit code, but /machine:x86 seems to disagree with that. I tested this by building as 32 bit, then just repeating the link using the 64 bit linker and /machine:x86, worked fine.
So what are their problems exactly? That they are building with VS 2005 (from what I saw in that thread) and MS isn't backporting changes to the 2005 toolset?
The problem is link time code generation. With link time optimization[0], which is a prerequisite for profile guided optimization, cl just compiles the source into some intermediate representation. Actual code generation is all done in link.exe at the final link. And /machine:x86 doesn't work with link.exe when using -GL. The x64 link.exe is not capable of generating x86 binaries with LTO.
Also note that the problem still exists in VS2012, and in fact is even worse because the linker memory usage has gotten higher in general. Fortunately we now have a fairly good workaround (developed with help from Microsoft -- maybe some of the people in this thread?) that limits which files participate in PGO:
The following list describes the various versions of cl.exe (the Visual C++ compiler):
...
x64 on x86 (x64 cross-compiler)
Allows you to create output files for x64. This version of cl.exe runs as a 32-bit process, natively on an x86 machine and under WOW64 on a 64-bit Windows operating system.
What they are currently doing is running a 32-bit linker to produce 32-bit output on 64-bit windows.
This 32-bit linker is apparently LARGE_ADDRESS_AWARE, so running it on 64-bit windows gets them 4GB to play with (which they are apparently rapidly burning through.) What they need is a 64-bit linker to run on 64-bit windows that can produce 32-bit output.
They used to be doing 1; now they are doing 3. They need to do 4, but apparently cannot. My understanding of this may be wrong, or they may be wrong, I don't know.
I'm not sure where that fits in. Are they wanting to use a 64-bit cl.exe and a 64-bit linker to make 32-bit output? Does mixing and matching the cl.exe and the linker change how much memory the linker can use?
I am baaarely familiar with Microsoft-world development.
You can't use a 32-bit cl.exe and a 64-bit linker to produce 32-bit binaries if you're doing PGO (which is the whole point of this "using lots of memory" issue). In particular, the 64-bit linker can't produce 32-bit binaries when you're using link-time code generation, which PGO does.
I am talking about the cl.exe executable that output x86-32 code itself, not the code it produces. You can use a 32-bit cl.exe to cross compile 64-bit code, but not the other way round.
If you compile with /LARGEADDRESSAWARE and run the executable on 64-bit Win7, your process has a maximum of 4 gigabytes of RAM. I know, I tested it up to 3.6 GB. Not bad at all!
> Often the problem with these queries is that there are not enough devs complaining to MSFT.
No. "Customer complaints" are not a substitute for product design sense.
Any survey of your customers is inherently biased: Your customers, by definition, are people who were willing to buy the product. You're rarely going to discover your biggest problems by counting the number of customers who report them, because your biggest problems are the ones that make people unwilling to become your customers in the first place.
Top voted on SQL Server Connect is a relatively simple issue (on the face of it) that's been outstanding for 8 years in two separate tickets, with no meaningful response from MS.
It took me 9 months and 45 calls to support to get a relatively simple regression in IE9/ClickOnce fixed and all we got was a fucking registry fix that we now have to ship to 2000 clients.
Basically they broke ClickOnce in IE9 for launching via scripts due to the new download prompting stuff.
Neither the framework team or IE team wanted to take responsibility leaving my poor support rep to reverse engineer both products.
Personally, I would like to see proper C++11 support instead of playing around with C++/CX, and while they are at it, either a native compiler for .NET or an NGEN optimizer improved to C++ performance levels.
Given that Microsoft had .NET native compilers for Singularity, and that Windows Phone 8 .NET apps are compiled to native code, it isn't as if they don't already have the tooling.
Many cross-platform libraries are written in C. Microsoft's terrible support for C causes those projects a significant amount of pain. If the upstream library developers don't have time/interest to restrict their dialect to avoid various C features and to test regularly with Microsoft compilers, they either (a) drop support for Windows, or (b) only support Windows via non-Microsoft compilers. Neither of these options are very appealing for users of those libraries on Windows.
The Windows kernel is C++. From what I've seen (shared source), it's mainly C-style C++ though. I didn't see a single template or a class in my travels.
it's in C. as it was explained to me once, the compiler team asked the kernel team if they would be fine fixing C support at C89 and the kernel team said "sure". being the largest and most important consumer of the compiler (at the time, possibly still today) that was that.
the kernel team is also pretty slow to adopt new C compilers, and you can see this by watching the lag between compiler versions released in the driver development kit and those released with visual studio.
What? When was the last time C++ produced external symbols readable by anyone else?
If mangling/decoration of symbols were standardized - and how to structure vtables, and how to handle exceptions, and so on - then I could say "C is officially deprecated". Until then, C is still the lingua franca for native development.
Write in C++ all you want, but expose a "C" interface - you'll be much more easily accessible from anywhere else.
Sometimes it's harder than that. For example, on Windows XP, a __declspec(thread) variable won't work if it's in a DLL (this has been fixed since Vista).
Not really a compiler problem, but more of a linker/loader problem. If the __declspec(thread) is in the main .exe, it works.
Things like this affect, for example, ANGLE (WebGL) when compiling for XP; as such, one has to resort to TlsAlloc/TlsFree.
It's just an example where an OSS project might've used the gcc/clang primitive for a thread-local variable, and later someone ports it to Visual Studio, only to discover that it won't work in a DLL on XP.
Now I know XP is no longer supported, but a lot of OSS projects still target it (and probably lots of others do, too).
Then again, it's not such a biggie; for example, Elmindreda, the GLFW developer, kindly re-implemented the feature once I reported the problem.
"hokey team visualization gimics" keep management off the teams ass, so I appreciate a little attention. Though I ultimately agree, there isn't a lot here that's scratching any of our itches.
Not hard. But how hard would it be to make large existing codebases which have always been built by VC++ compile against them? Very. Let alone the institutional aversion to admitting their competitors have done something better than them...
Personally I've wanted a project type without a "compile" in VS for a while... mainly for other environments that serve as a client/service for the VS stuff... I often use a "website" project and uncheck it from "build" but that's not quite what is needed.
Mainly, as an example, I want to be able to have my NodeJS webservice project in the same solution as my MVC website project, and the C# dll client project. The NodeJS solution doesn't need all the assembly folders, or a .Net compile... but it does need the configured prebuild step, that's it. People say to use a "Website" project, but that brings my entire node_modules tree into source control. It's a pain.
It should be noted that features listed as included in the VS2012 Nov CTP, such as initializer lists, variadic templates, etc., should not in my view be listed. The CTP does not work via the Visual Studio interface, but only from the command line compiler. And the CTP is not included in either Update 1 or Update 2.
From the article: "I will not, in this post, be talking about many of the new VS 2013 features that are unrelated to the Application Lifecycle workflows."
It seems that the OP talked about the blog's focus first, and will come back later (maybe in a different post?) to talk about everything else.
Yeah, it seems I was overly optimistic when I wrote that post on CppRocks. I was thinking that they would release another update of the CTP features pretty soon, but it doesn't look like it's happening...
Yet another Visual Studio, yet another truckload of "features" that do nothing to help day-to-day, heavy-lifting programmers. (And which probably help by adding bugs or just bloating the system). Sigh.
I am hoping they have made substantial improvements that are not mentioned in this blog post.
Note that the blog author is part of the Team Foundation Server organization, so it's natural that he's talking about those features and not the programming language and runtime features.
... though I guess if you look at the rationale behind all these features as "get people who buy into these features so tangled up in them that they can't use anything else, or reasonably port their applications to another OS", then it all makes sense.
Of course, that is the opposite of what I as a software developer want.
The VS keyboard macro system has "overengineered" written all over it. Really all they had to do was record some keystrokes. Instead they probably had a team of like four engineers, a bunch of Q/A and several PMs on it, and it took them over a year. And the result /stinks/; it blows dead exploding goats.
As someone who uses ReSharper to make the IDE cool, git for source control, TeamCity for builds, NUnit for C# tests, mocha and testacular for JS tests, and a goddamn whiteboard, cards, pens, and blue-tack for "agile portfolio management", I'm not seeing a lot here that interests me.
I'm underwhelmed with the desire to suck all activities into the one tool to rule them all. Especially when it rules them from bleh TFS.
I want VS to let me write code. I want to find and manipulate text. I want it to compile fast and produce relevant warnings. I want a debugger. I want it to host diverse plugins. That's mostly it. ALM? I'm probably doing whatever it is that you mean by that, and I don't care about integrated tools for it.
I want VS to let me write code. I want to find and manipulate text. I want it to compile fast and produce relevant warnings. I want a debugger. I want it to host diverse plugins. That's mostly it.
That's all I want Visual Studio for, too. Fortunately those things will continue to work and we can continue to ignore things like TFS and mstest.
It would be really nice if you could strip the things out of VS that you don't want. For me, all I really want is the code editor, R#, NCrunch, nuget, and the debugger. There might be a few other things that I didn't think of right away. I personally prefer to do source code management (git) outside of the IDE, because I find visual studio doesn't tend to make good choices about what should be controlled.
If I could make VS just do those things and nothing else, that would be really nice.
Way way way way, WAY too soon, Microsoft... most shops, including my current employer, aren't even on 2012 yet... Honestly, I'd prefer VS to be tied to desktop OS releases or SQL releases or something other than yearly.
This is not a Madden game, nobody is asking for yearly updates for Visual Studio.
I find this response almost shocking. Most of HN criticizes Microsoft for slow release cycles and slow change, but Visual Studio has clearly changed for the better by addressing UserVoice requests, integrating Git, etc.
If your employer is too slow to keep updated, why should that slow MSFT down while it is trying to reinvent itself? Just catch up in 2015.
Most of HN criticizes Microsoft for slow release cycles
Just because it's trendy to bash MS for not releasing new browsers/IDEs/whatever every few weeks, that doesn't mean the vocal critics speak for all of us, nor even necessarily for the majority. I get paid to build stuff that works, and doing so requires stable foundations and reliable tools. Lately, I wish we had more Microsofts in the world paying attention to things like backward compatibility and long-term support, and fewer Googles and Mozillas and Apples who are quite happy to push out updates that break useful things that worked before.
There was nothing wrong with releasing a major version when you had major new features, more frequent minor versions that introduced minor features in a compatible way, and point releases for urgent security issues or bug fixes that didn't change any intended behaviour at all. This approach has worked for a long time, and IMNSHO the recent regular, fixed schedules for updates regardless of actual changes aren't doing most people any favours, and the version-free, always-online, push-updates-whenever stuff is just a mess that causes far more trouble than it's worth.
>Lately, I wish we had more Microsofts in the world paying attention to things like backward compatibility and long-term support, and fewer Googles and Mozillas and Apples who are quite happy to push out updates that break useful things that worked before.
No, I really did mean backward compatibility. Specifically, I was thinking of the degree to which MS have supported even very old (by IT standards) APIs and file formats and protocols even in much newer (again by IT standards) systems over the years, and the lengths they have sometimes gone to in order to maintain that support despite the potentially breaking changes not even being their fault in many cases.
However, I would also agree that they are better at forward compatibility than a lot of other major software developers today.
If your employer is developing PLUGINS for a specific other product, then it has to stick to the COMPILER that product was released with (CRT/C++ runtime compatibility).
So here is one reason. Another one is money: why spend money every year (or two) on a product that doesn't bring that much to the table (at least for a lot of the projects I've been on)?
nobody is asking for yearly updates for Visual Studio
Ahem... I am. I would much rather have smaller more frequent updates to important tools than huge disruptive ones. Short feedback loops are a good thing.
> I'm pretty sure I can upgrade to a service pack. Changing versions, OTOH, probably requires approval from management.
Two things are apparent:
1) The problem you are addressing is a management problem in your organization, not a problem with Microsoft releasing a new version of VS.
2) You aren't the decision-making customer for Microsoft VS at your organization, so even if the problem you had was with VS, it wouldn't necessarily be a problem MS would have much incentive to address.
That's what service packs are for. With a product that has as big a footprint as Visual Studio, it will be impossible for shops to keep up with annual versions. Many will simply delay updates. Ironically, increasing the pace of major releases may actually delay the adoption of some improvements by customers.
How does it hurt, though? You say you're not running 2012 yet, so obviously you're not feeling forced to upgrade before you're ready. So why shouldn't they put out the latest bits when they've got them ready and we users can start using them when we're ready?
If you've ever had to maintain a large, legacy .net app that has been around since 1.0 or even has some ASP 'classic' code in it, upgrading is a major nightmare. I know of at least two shops right now 'stuck' on 2005 because of various problems upgrading to 2008, let alone to 2012 or 2013 now. They are more likely to re-write the whole thing in Ruby at this point.
I'm actually running 2012 personally, not such a big deal on that front, my side projects tend to be pretty manageable and not hugely large or sophisticated.
But yeah, you have anything large and enterprise-y, the pain just amplifies each version you are behind.
To be honest I dragged a 1.5MLOC .Net 1.1 Web Forms and SQL 2000 application up to 4.0 and SQL 2008 R2 last year.
It wasn't that much of a problem. It took about 3 days and most of that was fixing deprecation warnings and porting the in house test framework to NUnit.
The real problem in the process was getting the build tooling and all the associated crap surrounding the solution up and running. This was also only because the muppets who wrote it originally put it together with sticky tape and string.
You can install several versions of Visual Studio side by side - keep an old version for maintaining legacy projects, use a new version for new projects.
Version 4.5 was designed to be backward compatible with version 4.0 and is indeed an in-place update. Therefore, after installing version 4.5 you will no longer have the version 4.0 assemblies, but you can still target version 4.0 using the backward-compatible version 4.5 assemblies.
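As an illustration (a hypothetical project-file fragment, not taken from any project discussed here), targeting 4.0 on a machine where 4.5 has been installed in place looks like this in MSBuild terms:

```xml
<!-- Hypothetical .csproj fragment: even with .NET 4.5 installed in place,
     this property makes the project compile against the 4.0 reference
     assemblies rather than the 4.5 surface area. -->
<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
</PropertyGroup>
```

Note that targeting 4.0 this way still means the app runs on the in-place-updated 4.5 runtime on the development machine, which is exactly the behavioral-difference concern raised below.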
Nevertheless, I have read some worries that some bug fixes (may) break backward compatibility, and that therefore projects targeting version 4.0 developed on machines with version 4.5 installed may behave differently when run on machines with only version 4.0 installed. Whether this is a real problem, that is, whether there are observable changes that make version 4.5 not fully backward compatible with version 4.0, I cannot tell.
UPDATE: There is a published list of breaking changes [1].
After reading through the two linked pages I cannot see the relevance. The first article just discusses the in-place update but shows no (real) problems with it. The problem in the Stack Overflow question turns out to be caused by a bug in the OP's code and not by a breaking change between versions 4.0 and 4.5. Of interest may be the linked list of breaking changes [1], and given the size of the .NET Framework it is a very short list.
Even if there were only one issue, that is still an issue you have to spend effort working around if you're the poor soul who has to fix the product that broke.
I am still unable to see the problem. It's your decision whether you switch to 4.5 or not. If you have projects targeting 4.0 and are unable to ensure that your code does not hit any of the documented breaking changes, or if you are unwilling to take the risk of hitting unknown or undocumented compatibility problems, then don't switch to 4.5. Or take a hybrid approach: develop against 4.5 but test against 4.0.
We have deviated quite a bit from the original comment - you can install different Visual Studio (and .NET Framework) versions side by side if you have to support legacy projects. There are some cases where this will not work - obvious examples are .NET 1.0 not being supported since Vista and .NET 1.1 since Windows 8 - but there are still a lot of cases where it works just fine. If a side-by-side installation is not possible, you can still set up a second machine or a virtual machine for your legacy development needs. Not being able to upgrade Visual Studio because you have to support legacy projects is mostly a non-issue.
In our case, teams that support .NET 4.0 are not allowed to upgrade to .NET 4.5, precisely because of the .NET 4.5 bug fixes that lead to different behavior between versions.
Seriously! My company has been keeping up with the latest stuff, but I have a hard time understanding the justification behind releasing a whole new Visual Studio version. Are there new features that won't be available in VS 2012? I guess we'll see at the Build conference.
VS is not there to make money directly, but to sell MS platforms - Windows, SQL Server, etc. They give it away for free to students, startups and pretty much anybody who asks. The only reason all editions are not free is to not kill the other commercial Windows IDEs.
I'm actually a bit relieved to see a new version of VS so soon. The Entity Framework team said that v6 would come out with the "New" Visual Studio and I was wondering how long that would be. Apparently it's a lot closer than I thought.
Personally, it's a bit unnerving to see a new version instead of another update so soon... an upgrade is nicer than having yet another version of VS installed... the VS installer installs so many bits and pieces, it's nearly impossible to get rid of all of the last version to upgrade to the new one.
I'd rather see a 2012.3 version... I'd also like to see a LOT more stability, as well as a non-building project type (for external systems) that still has pre/post-build events but no compile step from inside VS (mainly for projects that use other runtimes/build systems but make sense to include in a VS solution).
I know everybody is hot and bothered about C++11, but honestly I'd suggest just getting proper support for things we've needed for a decade or so before chasing the new shiny.
I was asked to renew our expiring VS licenses at my previous role, and was all but paralyzed by the bevy of options and bundles available.
What is the motivation behind having 10+ bundle and service options? Is it to trick us into buying the wrong thing, then forcing us to buy other addons that we initially didn't realize we needed? Is it an attempt to maximize sales by offering tons of bundle options to extract the maximum value out of the variety of customer needs that exist out there?
I guess shops that are even looking to buy VS are so locked into VS that we will spend the cognitive energy to figure out where the best value is for our needs, but this current method just doesn't seem elegant or efficient to me.
What is the motivation behind having 10+ bundle and service options?
Maximize revenues.
The trick is that many users will opt for the most expensive option because they don't understand or are intimidated by the marketing/licensing material and therefore make the safest choice. If you don't know what you need (and most large corporate shops don't), you buy everything.
Having lower priced options mostly serves to hide the true costs of the product.
They don't support inline assembly because it messes with the register allocation in their compiler. It's likely they'll never support it because of this. :(
Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.
Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off".
This pisses me off, too. I really should patch the menu resource someday and change it to lowercase. I also don't understand why Microsoft violates its own human interface guidelines; shouldn't they set a proper example?
I don't mind it. It's not like it's high-contrast, large font, or bold, it just serves as a heading for the menus they show without making the font large or bold.
It's definitely nowhere near as annoying as intellisense randomly breaking and putting a red line underneath half your statements.
"Posted by JoeWoodbury on 25/02/2010 at 10:16 (...)
The solution is to add the location of the stdafx.h file to the include path list. This is often a matter of simply putting a dot comma (.,) as the very first item in a projects include list."
It's precompiled headers, and you can prove it: if you turn off precompiled headers in your project and rebuild, compilation will fail in all the files where you see the red lines. That's because your project has subfolders, and the files in those subfolders only reach stdafx.h by accident of having "use precompiled headers" turned on. Once you fix the include path, which you can do at the project level, your project will compile regardless of the "use precompiled headers" setting, and IntelliSense will work too. I know from experience; I fixed the big projects I worked on.
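In project-file terms, the fix described above might look something like this (a hypothetical .vcxproj fragment; the assumption here is that stdafx.h sits in the project root):

```xml
<!-- Hypothetical .vcxproj fragment: add the directory containing stdafx.h
     to the include search path, so sources in subfolders find it without
     relying on the "use precompiled headers" setting. -->
<ItemDefinitionGroup>
  <ClCompile>
    <AdditionalIncludeDirectories>$(ProjectDir);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </ClCompile>
</ItemDefinitionGroup>
```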
EDIT v2: Looking at the project you cited in the reply to this message and the file http://code.google.com/p/freetype-gl/source/browse/trunk/dem... on a computer on which I don't have VS, I still have an idea what the reason for such errors is: the project authors use something like:
#elif defined(_WIN32) || defined(_WIN64)
Now think about it: when IntelliSense parses the file, can it assume you're building a 32-bit version? No. Can it assume you're building a 64-bit version? No. So IntelliSense doesn't use anything inside that #elif at all. Setting the defines in the project can fix that.
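To make this concrete, here is a minimal sketch (hypothetical code, not freetype-gl's actual headers, and `parsed_branch` is a name invented for illustration) of the guard pattern: whichever macros the parser has predefined decide which branch exists at all.

```c
#include <string.h>

/* Illustrative only: headers guard Windows-specific declarations roughly
   like this. An IDE parser that has neither _WIN32 nor _WIN64 among its
   predefined macros skips the Windows branch entirely, so everything
   declared inside it simply does not exist as far as the IDE knows. */
#if defined(_WIN32) || defined(_WIN64)
#define PARSED_BRANCH "windows"
#else
#define PARSED_BRANCH "other"
#endif

/* Reports which branch the preprocessor actually took. */
const char *parsed_branch(void) {
    return PARSED_BRANCH;
}
```

A real compiler targeting Windows predefines _WIN32 automatically; per the parent's suggestion, adding the appropriate define to the project's preprocessor definitions gives the IDE's parser the same view.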
I'm not using PCH in the most recent example. I've been playing with this project: http://code.google.com/p/freetype-gl/ , so it includes FreeType, GLUT (using freeglut), and OpenGL stuff. Builds fine, IntelliSense lines everywhere.
I actually prefer the all-caps menu. I find that the letters being the same height improves readability for me. I wouldn't want to read a book that way though.
It's similar to making notes by hand, I'll generally use all-caps when doing so.
I personally don't mind the menu caps, but I do find the way they are forcing their design choices on everyone somewhat antagonistic, and generally not smart.
The same issue happened with their forced grey or white color schemes. It's not hard to give users the option to switch things like this to suit their preference. Yet MS has deliberately not provided any way to adjust these settings, and this gives users something to rant about, and rightly so. Really not a smart move in my view.
You can edit the scheme, but only after hunting down and installing an extension that was not available at the initial release. The same could be said for the start menu on Win8. MS could have easily quieted the many loud voices by including options so users can choose their own preferences.