It comes naturally to be careful when you have something at stake, and generally I would not suggest there is anything wrong with that. It keeps you focused and it might help you avoid sloppiness, but I consider it important to acknowledge the limitations of this mentality when you are collaborating with a team on a shared codebase. In the context of software development I first encountered this way of thinking as early as the first assignments at university. We used to stay up late the night before the deadline and tinker haphazardly with the code until we got something that resembled the stated requirements. We scrutinized the code a couple of extra times before we handed in what felt like a house of cards assembled with glue and duct tape, and we hoped that this extra inspection would catch any remaining mistakes. Needless to say, this way of working is severely unprofessional, and we all swore never to do anything similar once we got a "real job".
After a few years of experience and plenty of discussions with other developers in the industry, I have concluded that most organizations have introduced mechanisms to avoid this kind of last-minute code scrutiny, whether through dedicated testers, quality assurers or automated acceptance tests. What concerns me is that even when people put these mechanisms in place, I often detect that the underlying mentality is still there, and I would like to make some notes on why it is important to do something about it, and what you can do to reduce your dependency on it.
Defects come from incomplete communication, not only mistakes
The illusion that this approach is effective is often reinforced by the belief that defects are caused by mistakes. Indeed, people do make mistakes, often, and any self-critical developer with some sense of professionalism knows this and has serious strategies to manage it. Introducing code reviews and pair programming might be a step in the right direction, but what people tend to miss in the context of large-scale systems is that communication, which by definition means translating an understanding from one brain to another, is always incomplete, whether that communication is between stakeholders, project managers, developers or quality assurers.
Without a well-defined specification or low-level test coverage, these communication gaps become breeding grounds for defects, since people will always differ somewhat in their expectations. By this reasoning I will argue that even if you and your colleagues are top-notch developers who never make any "mistakes", you will still have a fair amount of defects in your system. Hence the incompleteness of this way of thinking.
The need for carefulness is fear in disguise
When people feel the need to be careful, they are put in a situation that gradually converges towards fear and uncertainty. Fear is a mind-killer. Without even elaborating on the underlying neurological effects, it should be obvious that fear and uncertainty induce stress and anxiety, both prominent killers of creativity and productivity. If you are ever nervous about committing a change to your source code, you should consider this a warning sign.
The psychological fallback, especially for people under stress, is to do what they always do: keep doing what you are good at and hope for the best (side note: consider the NDC 2011 talk on deliberate practice by Corey Haines). Even if fear is not a prominent factor, you should still consider that this kind of behavior might do long-term damage to your team in the sense that it impairs the learning process. If you are trying to adopt a new methodology or introduce a new project structure or process strategy, this mentality will make people resist change. When deadlines start to loom, people simply drop the new ways of thinking for the comfort and safety of old habits. This makes progress slow.
Fear of changing code leads to localized bug fixes
When people encounter uncertainty in the code, they tend to rely on the least intrusive fix to patch a bug or circumvent a problem. When people fear big changes due to regression risks and at the same time have to respond to new bug reports, the attitude will eventually boil down to "fixing bugs without changing any code". This situation is contradictory in nature and will encourage quick fixes, dirty hacks and duct tape. Even if you are very talented, it is hard to maintain discipline and avoid technical debt unless you refactor your code continuously, which is itself impossible if you don't dare to change any code. These small and localized fixes will eventually pile up, and you are on a steady stroll towards a big ball of mud. In reality, managing to allocate time for cleaning up or battling technical debt is seldom feasible. The problem is simply hard to estimate and quantify, and there are always new features and customer requests that take higher priority.
Remedies - what you can do to improve
Effective software development is an endless topic, but for the sake of this subject the remedies will be kept short and focused on improving the confidence you have in your code. All of them relate to communication and how to improve it.
Write clean and understandable code
Invest some time and read Clean Code. You should understand the importance of something as trivial as naming a variable correctly or keeping methods short and concise. You should appreciate the value of needing half a second to comprehend the purpose of a variable, rather than three seconds, when you skim through large quantities of code. If you manage to establish a consistent nomenclature in your code and get your team to value these principles, it becomes a lot easier to trust the code. In my opinion, readable code by itself might be a rather shallow measure of software quality, but it makes things a lot easier to work with, and I would strongly recommend it to anyone who takes programming seriously.
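To make the point concrete, here is a small invented example showing the same logic written carelessly and then with deliberate naming; the function names and data are hypothetical, but the difference in comprehension time is the point.

```python
# Careless version: the reader must decode what d, l and x mean.
def proc(d, l):
    return [x for x in d if x > l]


# Careful version: the names alone state the intent.
def filter_scores_above_threshold(scores, threshold):
    """Keep only the scores that exceed the given threshold."""
    return [score for score in scores if score > threshold]


print(filter_scores_above_threshold([1, 5, 10], 4))
```

Both functions behave identically, but only one of them can be trusted at a glance.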
Do regular code reviews
Doing regular code reviews seems to be somewhat of an industry norm by now, and it is very valuable since it provides a medium in which to communicate information about the code between developers. It is a chance to give feedback, provide new perspectives and hopefully catch some of the plain old human mistakes.
An interesting note regarding "carefulness", however, is that even if you have a fault-tolerant environment and hope that your peers will catch your mistakes, in my experience it doesn't really reduce the need for carefulness in any significant way. It merely distributes the blame for the mistakes that slip through. On a similar note, you shouldn't focus too much on catching mistakes as the purpose of doing code reviews. A review riddled with red text doesn't necessarily mean you did something wrong; it might just mean that you and your team have a lot of synchronization to do before you implicitly agree on a culture of code style. Code reviews are highly recommended, but like many strategies, they are incomplete by themselves when it comes to establishing quality.
Different levels of specification and testing
People focus very much on having tests simply to catch regressions. There is of course great value in this, but people often miss the enormous value of having tests simply to improve readability and to communicate the purpose of the code. Even if you have a perfect structure, a traceable flow of control and well-defined function names, there is a limit to what you can communicate without tests that describe what should be done, not only how to do it. It grants a great sense of confidence and trust when you can look at a function and say "yes, that function should do that, because the test targeted at it says so". Functions almost become correct by pure definition.
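A minimal sketch of the idea, with an invented function and test: the test name and its single assertion state what the function should do, independently of how it does it.

```python
def net_price(gross_price, discount_rate):
    """Apply a fractional discount to a gross price."""
    return gross_price * (1 - discount_rate)


def test_net_price_applies_the_discount_to_the_gross_price():
    # The expectation is readable as a sentence: a 25% discount
    # on 100.0 should yield 75.0.
    assert net_price(100.0, 0.25) == 75.0
```

Anyone reading `net_price` can consult the test to confirm its intended behavior, rather than reverse-engineering the intent from the arithmetic.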
All the intermediate communication steps between your customer and your bits and bytes are really just different abstraction levels of the same thing. On a small-scale project, everything seems to work as long as you're being pragmatic, but with bigger teams and longer timespans the need for formalized communication increases. By "formal", I mean some kind of communication that can be verified and/or translated in some kind of automatic fashion. In practice, this means different levels of specifications and different levels of testing.
Depending on the nature of your software and its domain, there are several different approaches to describing high-level specifications, and you can even describe certain aspects in different languages and formats. If your target domain is very specific, you should consider creating a domain-specific language using tools such as Xtext. If your application is somewhat generic, you should consider specifying the behavior of the system using techniques such as Behavior-driven development, with tools like RSpec and Cucumber. With FitNesse you can create wiki-like specifications that are easily combined with testing. As a low-level specification, you should definitely use unit testing.
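To illustrate the behavior-driven style without committing to a particular tool, here is a hypothetical sketch in plain Python; the `ShoppingCart` class and the given/then structure are invented for illustration, while RSpec and Cucumber express the same idea in their own formats.

```python
class ShoppingCart:
    """A toy domain object used only to demonstrate behavior-style tests."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_an_empty_cart_has_a_total_of_zero():
    # Given a new cart
    cart = ShoppingCart()
    # Then its total is zero
    assert cart.total() == 0


def test_the_total_is_the_sum_of_item_prices():
    # Given a cart with two items
    cart = ShoppingCart()
    cart.add("book", 12.0)
    cart.add("pen", 3.0)
    # Then the total reflects both prices
    assert cart.total() == 15.0
```

Each test reads as a behavioral expectation first and an implementation check second, which is exactly the communicative role the higher-level tools formalize.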
These tools and techniques all have a common denominator: they provide a medium in which to communicate expectations at different levels, and they provide the mechanisms to verify them. This is the core of effective programming, and the shortened feedback loop they provide is essential. As explained in Specification by Example, this living documentation becomes valuable (especially in long-term projects) because it provides a medium in which stakeholders, testers, developers, graphical artists and quality assurers can communicate, and the continuous verification that the tools provide enforces its validity and keeps the documentation up to date.
You don't need to apply all the tools and techniques right away. Start small, and read some background on the different methods to understand what they try to achieve and how to combine them. Be sure, however, that you don't make the tools and techniques the end goal. They are merely tools for making software development more effective, and you should get a fundamental understanding of which tools suit which situation, because none of them are complete or perfect. When you realize that there is so much more value in good tests beyond just catching "mistakes", you are on a good path toward establishing more confidence in your code.