My daily readings 09/24/2012

  • tags: programmer

    • The problem with code quality is that there’s so much AND-ing that most people give up on understanding this massively difficult problem that is as much social and industrial as it is technical.

      One of the first things you learn about systems is that parallel dependencies (OR-gates) are better than serial dependencies (AND-gates). The first has redundancy, the second has multiple single points of failure. That’s also true with regard to how people manage their careers. Naive people will “yes, sir” and put their eggs into one basket. More savvy people network across the company so that if things go bad where they are, they have options.

      To have code quality, you need people who are good at writing code AND reasonable system designs AND competent understanding of the relevant data structures AND a culture (of the company or project) that values and protects code quality. All of these are relatively uncommon, the result being that quality code is damn rare.
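The arithmetic behind the OR-vs-AND claim is easy to sketch. A minimal example, assuming each dependency fails independently with the same probability p (the function names are illustrative, not from the comment):

```python
# Failure probability of a chain of serial (AND) dependencies versus
# a pool of parallel (OR) alternatives, assuming each of n components
# fails independently with probability p.

def serial_failure(p: float, n: int) -> float:
    # AND-gate: the system works only if every one of the n components
    # works, so it fails unless all n survive.
    return 1 - (1 - p) ** n

def parallel_failure(p: float, n: int) -> float:
    # OR-gate: the system works if any one of the n components works,
    # so it fails only when all n fail at once.
    return p ** n

if __name__ == "__main__":
    p, n = 0.1, 4
    print(f"serial (n single points of failure): {serial_failure(p, n):.4f}")
    print(f"parallel (n redundant options):      {parallel_failure(p, n):.4f}")
```

With p = 0.1 and n = 4, the serial chain fails about 34% of the time, while the parallel pool fails 0.01% of the time — the same math that favors networking across the company over one basket of eggs.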

    • You can normally fix bad code – fixing bad data structures is not usually easy or even possible.

      It’s why I’ve still not fully bought in to ‘release early release often’.

      I prefer to defer releasing for production use until really satisfied with the structures – this way you have no barrier to ripping the foundations up.

      If you're not 100% comfortable with the model, prototype a bare-metal improved one (schemaless DBs FTW); if it feels better, start pasting in what logic/tests you can salvage from the earlier version and move on.

    • I’m in the position of maintaining a legacy codebase. I feel like I’ve shown up half-way through a game of Jenga and management still wants me to play the game with the same speed as the guy who played the 1st half.

      Meanwhile, he’s been promoted to start work on a brand-new Jenga tower since he’s demonstrated such remarkable success in the past.

      I just want everyone to stop playing Jenga.

    • Release early, release often gives you chances to fix your mistakes before it's too late. Waterfall works great if you can manage to get your data structures perfect before production; it begins to fail when it becomes prohibitively expensive to fix mistakes after production, where, not unexpectedly, not breaking old code takes precedence over deploying new code.

    • There are other options besides “release early, release often” and “waterfall”.
    • What might those be? Honest question. My inexperience is probably showing, but I have a hard time picturing what the middle ground might look like.
    • One of the classics is basically the midpoint between them, AKA “plan to throw one away.”

      Requirements, Design, DEMO, Requirements, Design, Implementation, Verification, Release, Maintenance.

      The idea was to use a prototyping language for the demo and then a production-worthy language for release. The problem was that a lot of demos ended up in production for literally decades, because people tried to create “over-architected crap” which got scrapped or took decades to release.

      Honestly, I think the real problem with most early development strategies is that so few people have a clue how to actually design good software. Great solutions are minimal systems that solve the problem; they are flexible because they are minimal and programmers can alter code, rather than someone conceiving every possible change request up front.

    • One of the big tricks here is not to be religious about any technique. Pick what fits best given the information that you’ve got for the situation that you find yourself in and don’t be afraid to change the mix over time as the situation changes or you find yourself in the possession of new knowledge that is inconsistent with your past views on the state of affairs.

      If you blindly adhere to some method or other, you're going to find out exactly what its limitations are. You're going to have to be flexible, and you're going to have to mix and match as time goes by.

      As an example, ‘agile’ comes up here with some regularity. It’s a great principle but it’s not a religious thing. Feel free to adopt some but not all of agile to come out ahead. Adopt all of agile in a religious fashion and you’ll come out a loser.

  • tags: design readability

  • tags: security

  • tags: sysadmin

  • tags: MySQL innodb

      • There is a more complete answer with regard to InnoDB


        Keep in mind the busiest file in the InnoDB infrastructure : /var/lib/mysql/ibdata1


        This file normally houses six (6) types of information:


        • Table Data
        • Table Indexes
        • MVCC (Multiversioning Concurrency Control) Data
          • Rollback Segments
          • Undo Space
        • Table Metadata (Data Dictionary)
        • Double Write Buffer (Background write to prevent reliance on OS caching)
        • Insert Buffer (Managing changes to non-unique secondary indexes)


        Running OPTIMIZE TABLE against an InnoDB table stored inside ibdata1 does two things:

        • Makes the table’s data and indexes contiguous inside ibdata1
        • It makes ibdata1 grow, because the contiguous data is appended to ibdata1
    • You can segregate Table Data and Table Indexes from ibdata1 and manage them independently.


      To shrink ibdata1 once and for all you must do the following:


      1) MySQLDump all databases into a SQL text file (call it SQLData.sql)


      2) Drop all databases (except mysql schema)


      3) Shutdown mysql


      4) Add the following lines to /etc/my.cnf
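      (The configuration lines themselves did not survive this excerpt. Judging from the surrounding steps — per-table tablespaces referenced later, 1G log files in step 6, and the 25% sidenote — they presumably resembled the sketch below; the 4G buffer pool size is an illustrative assumption.)

```ini
[mysqld]
# Store each InnoDB table's data and indexes in its own .ibd file
innodb_file_per_table
# Illustrative sizes: log file size kept at 25% of the buffer pool
innodb_buffer_pool_size=4G
innodb_log_file_size=1G
```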




      Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.


      5) Delete ibdata1, ib_logfile0 and ib_logfile1

      Sidenote: At this point, there should only be the mysql schema in /var/lib/mysql


      6) Restart mysql


      This will recreate ibdata1 at 10MB, ib_logfile0 and ib_logfile1 at 1G each


      7) Reload SQLData.sql into mysql


      ibdata1 will grow but only contain table metadata


      Each InnoDB table will exist outside of ibdata1


      Suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table


      mytable.frm (Storage Engine Header)

      mytable.ibd (Home of Table Data and Table Indexes for mydb.mytable)


      ibdata1 will no longer contain InnoDB table data and indexes.


      With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.


      I have done this many times in my career as a MySQL DBA


      In fact, the first time I did this, I collapsed a 50GB ibdata1 file into 500MB.


      Give it a try. If you have further questions on this, email me. Trust me. This will work in the short term and over the long haul!

    • UPDATE 2012-05-29 20:45 EDT


      With regard to Step 07, to speed up a reload of a mysqldump, it is best to raise your bulk insert buffer. Your changes to my.cnf for Step 04 should now look like this:
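      (Again, the configuration block itself is missing from this excerpt. Per the text, it presumably added a bulk insert buffer setting on top of the step 4 lines; the sizes below are illustrative assumptions, and note the author's later retraction regarding InnoDB.)

```ini
[mysqld]
innodb_file_per_table
innodb_buffer_pool_size=4G
innodb_log_file_size=1G
# Raise the bulk insert buffer to speed up the mysqldump reload
# (see the 2012-09-20 retraction: this setting has no bearing on InnoDB)
bulk_insert_buffer_size=256M
```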




      UPDATE 2012-09-20 08:22 EDT


      With regard to the bulk_insert_buffer_size, I am making a retraction.


      I had written a post on the DBA StackExchange stating that the Bulk Insert Buffer has no bearing on InnoDB, and that still holds true. Sorry for any confusion.

    • T-Mobile’s network modernization in Las Vegas is now complete, which means the carrier can now support mobile broadband speeds on the iPhone. T-Mobile CTO Neville Ray announced at GigaOM’s Mobilize conference on Friday that T-Mobile will begin marketing its “4G” HSPA+ service to unlocked iPhone users in Vegas on Monday.

Posted from Diigo. The rest of my favorite links are here.
