LAYING AN EGG WHILE JUDGING
Discussion with Theodore Shih, Denver, Colorado and Dennis R. Voigt
The egg is a pretty amazing way to produce offspring. Unfortunately, some eggs never hatch, some get cracked and some even go rotten. As a food source they are full of good stuff, long-lasting and able to be served up in countless ways. Interestingly, we’ve heard judges at retriever tests referred to in “egg” terms, such as “they sure hatched a good one there!” Or, more commonly, “the judges laid an egg!” meaning they laid a rotten egg!
Laying an egg, while judging, is everybody’s nightmare. Sure, sometimes things don’t go quite as planned, but designing and evaluating a test that nobody likes is something to be carefully avoided. Last issue, we talked about wind and ended by introducing the important role of factors. Understanding factors is key to designing a good test that achieves variation in performances. Just as important, however, are setting the right level of test, designing tests that are fair and robust throughout the day, and managing time well. In this issue we will discuss these ideas in the hope that they will help us all avoid “laying a rotten egg!”
Variation in Performance
Dennis: Ask around, “What is a good test?” and you will get many different answers. I think there are some obvious ones but among the most important characteristics of a good test is that it is fair and the same for all dogs and that it is matched to the level of the stake and the quality of the field. I have a simple measure of how well a test meets those criteria. I simply look for variation in performance that stays the same throughout the test. For example, if all the dogs do very well, the test is too simple. If most of the dogs do very poorly, the test is too difficult. If I see some dogs do very well, some very poorly and some just average, I conclude the test is at the right level. If the test starts off difficult and becomes easy (or vice-versa), conditions have changed making it unfair. These two criteria, consistency of your test and variation in performance, are characteristic of a good test in my book. Without them, you just might have laid an egg.
Ted: I guess I have never thought about variation in performance as a guideline, but now that you mention it, your standard makes a lot of sense. But, it only describes what we are trying to prevent, it doesn’t really tell us how to prevent it.
Dennis: I agree. It is simply a measure of your test. It is also something that you can use after the test to evaluate how well you did. One of the tools that I use is a running tally sheet in the back of my judge’s book. It has the dog’s number and boxes for each series. As the dog runs I check off “back,” “out,” or “?” (need to talk or see more dogs). I also make notations for pick-ups, handles and the start time of the test. At a glance, I can see how well things are going, and at the end of the test I can assess where we’re at, including the variation. After a trial, at home, it’s easy to do a simple analysis of how variable the work was and how it was distributed throughout the day. You can even see if callbacks are clumped. Pretty easy to give yourself an honest bummer or 5-star rating.
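The running tally Dennis describes can be sketched in a few lines of code. This is only an illustration; the series data below is invented, not from any real trial:

```python
from collections import Counter

# Hypothetical running tally for one series, mirroring the notations
# described above: "back", "out" and "?" (need to talk or see more dogs).
series1 = ["back", "back", "out", "?", "back", "out", "back", "?", "out", "back"]

tally = Counter(series1)
total = len(series1)
for mark in ("back", "out", "?"):
    n = tally[mark]
    print(f"{mark}: {n} ({n / total:.0%})")

# Nearly all "back" suggests the test was too easy; nearly all "out"
# suggests it was too hard; a healthy spread suggests the right level.
```

The same idea works on paper, of course; the point is that the proportions, checked series by series, give an honest read on whether the test produced variation.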
What do you think we need to do to prevent laying a rotten egg?
Ted: I think for starters that you need to have some fundamentals about how you set up tests. In the All-Age Stakes, as long as the fundamentals are present, either as a contestant or as a judge, I am not too concerned if the test is too difficult. Championship points are at stake and I believe that they need to be earned. I would always prefer that:
I so rarely see tests with all of the fundamentals that when I do see one, I am basically satisfied. If the test is fundamentally sound and challenging, I am in hog heaven.
If the test is fundamentally sound and too easy, most of the time I attribute that to judges who lack dog knowledge. On rare occasions, judges whom I know and respect come up with tests that are too easy.
But, I don’t consider tests that are fundamentally sound, which turn out to be too easy, to constitute “laying an egg,” unless the judges pencil whip the field to account for the ease of their test.
Dennis: I like your fundamentals to help with setup and evaluation of the test. But I do consider judges that have to “pencil whip” the field because of too easy a test as having laid an egg.
A lot of tests that I have seen were considered “too hard” by contestants because of a lack of one of your fundamentals. Fundamentals or not, if I see a 60-dog entry, including some of the top dogs in the country, and 75% of them can’t get the opening series of marks without handling – that’s a problem. I’ve seen one trial where only 10 out of 80 dogs got the birds without a handle or a horrendous hunt. These are problems that happen more often than they should.
On some occasions it is due to a really deceptive situation, such as a retired gun tight in front of an out-of-order flyer. This can fool most dogs into running long and not stopping the next time either. I think it’s quite easy to set up a test that is too hard. I like to think I could outsmart any dog by challenging them with something extreme, but I sure don’t think that is my job as a judge. I hate to think that a judge is so egocentric that he willingly strives to make dogs look bad. Personally, I like a test that is challenging and difficult, but if it makes too many high-end dogs look bad, I think there is a problem. I saw a test once that made 5 National Champions look like idiots. I had to conclude that the problem was the test, not the dogs. It’s hard to tell by looking at the test whether it was set up too extreme because the judges ramped it up to make sure it wasn’t too easy and they wouldn’t get burned, or because they wanted to appear commanding or unique, or because they had a malicious agenda. In other words, did they lay their egg intentionally? I personally don’t think a lot of poor tests are intentional, but I do believe many could have been avoided with more thought and care. I think working to ensure fairness to the dogs and equality throughout the day goes a long way to having a successful trial.
Dennis: On setup day, I really try to talk with my co-judge about whether we are doing everything we can to make our test equal and fair for all dogs. Among the key discussions here are those about lighting (and thus shadows), wind and scenting. Since Mother Nature is the primary controller here, these things can never be taken for granted. To be fair, you almost have to become obsessive about considering them because you can only anticipate so much. I personally find it frustrating to arrive at the test site first thing in the morning as a handler and see that there will be obvious inequalities developing. If that is so obvious at first glance I can only assume that the judges were simply negligent in considering them. More than once I have arrived to find the wind switched 180 degrees or the sun not far enough off or shadows much longer than I thought they would be. The result should always be to make an adjustment if possible. Too often, I see judges just shrug their shoulders and say, “we can’t control it!”
Ted: In our last article, I wrote about preparation as the key to success in judging. Like you, I am very obsessive about my judging assignments. I believe that my work as a judge reflects directly on me. Consequently, I want the product I present to the handlers to be the best that I can possibly produce. And that requires a lot of hard work and preparation.
Here is a list of things that I try to do to help make tests fair for all dogs and give them a real chance to show their strengths.
In addition, there is much we can do with bird placement and incorporation of factors to help design good tests that are easy to evaluate. But the fair and robust test that holds up all day helps keep both judges and contestants happy.
Dennis: We’d be remiss if we didn’t emphasize the importance of time management in preventing the rotten egg test. When judges run out of time, test quality suffers, callbacks get severe and poor decisions are made. Time management involves setting up a test that takes the correct amount of time given the logistics of line and test location and it considers the club’s mechanics. You need to factor in bird changes, no-birds, lunches, waiting for dogs and pick-up delays.
There is a lot you can do to save time. Do everything in your power to start on time. Sometimes club mechanics get you, but make sure it isn’t your fault that the test isn’t ready. You can save a lot of time by having quick callbacks. If you have to debate long about a dog, it is better to just bring it back. Get the “guns up” as the dog is returning, have the running dog deliver on the honour line, call dogs to line promptly and time your bird/gunner changes with lunches, etc. A minute or more saved per dog is easy to achieve, which for a 60-dog entry is a full hour – time you may need Sunday night.
Ted: Actually, I think that rather than being the last issue for discussion, this should be the first.
I think everything starts with time management. I don’t think enough judges really give sufficient consideration to time. When the judges don’t pay attention to time, the odds of laying an egg increase significantly. Dogs are run in poor conditions. Tests are split. Callbacks are brutal.
When I judge, time management is a primary driver for my decision-making. Time tells me what I can and cannot do. I may come to a field and see a test that I would love to run, but if it doesn’t fit my allotted time, it must be discarded.
For example, I will be judging in Florida this spring. I look at my sunrise/sunset table for April 1, 2011, and see that sunrise is at 7:23 am and sunset is at 7:52 pm. If we start running dogs at 8 am and finish our day at 6 pm, I have 10 hours of light with which to work. (If we go until 7 pm, I have another hour.) I am judging the Amateur. For purposes of discussion, assume that the Amateur starts on Saturday and ends on Sunday; further assume that there are 60 dogs entered. If I simply want to do marks on Saturday, I have 600 minutes with which to work. With a ten-minute test, I will not finish on Saturday once you consider no-birds, re-birds, gun changes, lunches, waiting for handlers, etc. So a ten-minute test is guaranteed to run two days, and will make Sunday a mess. By not paying sufficient attention to time, I have laid an egg.
In contrast, if I have a quick seven-minute test, the test will take seven hours. I can add an hour for re-birds, lunches, etc., and because I am pretty efficient with callbacks, I probably have two hours to run a land blind. Depending on the number of dogs called back, I might be able to get through a land blind, but it would be tight. If I don’t finish the land blind on Saturday, or dogs are running in the dark and neither dogs nor handlers can see one another, I have laid an egg.
If I have a six-minute test, I am pretty certain that I can get two series under my belt on Saturday, which leaves me with only a water blind and water marks for Sunday.
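This daylight arithmetic lends itself to a quick back-of-the-envelope check. The sketch below is only an illustration; the flat one-hour overhead for re-birds, gun changes and lunches is an assumed round figure, not a rule:

```python
# Rough field-trial time budget for one series, under assumed figures:
# a fixed overhead (no-birds, re-birds, gun changes, lunches, waiting).

def series_hours(num_dogs, minutes_per_dog, overhead_minutes=60):
    """Total hours one series takes, including a flat overhead."""
    return (num_dogs * minutes_per_dog + overhead_minutes) / 60

day_hours = 10  # e.g. running dogs from 8 am to 6 pm

for per_dog in (6, 7, 10):
    hours = series_hours(60, per_dog)
    verdict = "fits" if hours <= day_hours else "overruns"
    print(f"{per_dog}-minute test, 60 dogs: {hours:.1f} h -> {verdict} a {day_hours} h day")
```

Run with these numbers, the ten-minute test comes out at 11 hours and overruns the day, while the seven-minute test finishes in 8 hours with time to spare for a land blind, matching the reasoning above.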
If I am certain that my test will whittle the field significantly, I might not care if I didn’t get to a land blind on Saturday, but if I guessed wrong, and the dogs ate my test alive, I would really be in a bind.
The point is this – time management is critical.
Dennis: I can’t argue with you regarding the importance of time management – I guess that’s why we wisely left it for last. However, on reflection about how I go about judging assignments, I realize that I don’t start with time management as the first consideration. Rather, I simply consider it in every decision I make with my co-judge about the setups. That means that I seek the fair and fundamental test, as we discussed earlier, but I am doing that with many other considerations in mind. Some of these are mechanical, which is surely related to time. In other cases, I am focusing on things such as day length, fairness of rotation, other stakes, club mechanics and grounds, and the forecast weather. I try to make every bird count and don’t keep an extra just for a delay. I guess I don’t even think of these things as being “time elements” any more. To me they are just ways I view good test design, but I do try to consider all the conditions that will prevent a screw-up and will produce a good, fair trial. But I guess you are right that all other considerations on setup day must be in the context of good time management. Regardless, once you have your test set, my priority becomes “DO NOT WASTE TIME” on test day.
One of the tools that I have used for both test design and smooth mechanics is my setup day checklist. This is something that I have employed since earlier versions by Sal Gelardi appeared in the old Retriever Field Trial News many years ago, mostly in relation to judging Nationals. Since then, I have used this abbreviated version as a checklist when I am doing setup with my co-judge. I have given it to many co-judges over the years, including at judging workshops. I think it is a formula to help avoid “laying a rotten egg.” These days, I find I really do not have to carry it around because I think about all these things constantly. I still often have a look at it at day’s end in the motel before test day. (See my checklist in the box below.)
Ted: I have a similar list. In my case, it’s spelled out as things that “I Believe In” rather than a checklist of considerations. Some of these I have already mentioned above but I include them here to have the whole list together.
Dennis: I suggest that between our two lists a lot of things are considered that would help us have a good trial. Combine them with good time management and our odds go up even higher. I guess the only thing left is how well we place our birds and incorporate factors. That’s a huge topic, of course, perhaps best left to a future issue.
Setup and Pre-trial Checklist
___ Dog Safety: check for hazards such as logs, ditches, cover tangle, briars, glass
___ Test Areas: moves, sequence of tests, split tests
___ Wind Patterns: directions, change potential, forecast
___ Sun Rise; Sun Set; direction of movement, reflections, shadows, backlighting
___ Other Stakes: location, coordination, sound, distance, interference
___ Gallery: location, movement, feeding, distance between stakes/tests
___ Special Needs: boats, float boards, hay, cut brush, mats, waders, white flagging
___ Scenting: cover, ducks vs. pheasants, moisture, irregular terrain
___ Trailing: cover, no. of dogs, moisture, drag back, scenting areas, pre-cutting
___ Alternates: tests, tests areas, time requirements, delays
___ Length of Test: time per dog, mechanics, distance, gun changes
___ Dog Performance: style, courage, nose, intelligence, perseverance, attention
___ Flyer: consistency needed, cover uniformity, effects of wind changes, safety
___ Gunners: retired guns, location, visibility, scenting, movement, wrapping/camo, gun stands
___ Background: traffic interference, terrain, shadows/darkness, cars, cattle
___ Bird Planter: location, wind, visibility, trailing, time to plant, bird sacs
___ Marking Blind: natural markers, paint, ribbon, depth, lighting
___ Honour: when, where
___ Blind Balance: obstacles and traps at beginning, middle and end, control needed, hazard room
___ Handler Instructions: line, movement, room behind blind, ditches
___ What type of dog will win this trial?
___ What if? Fog, rain, excessive heat
___ Bird count vs. number of dogs, freshness, type

EVALUATING BLINDS IN RELATION TO MARKS — PART 2
by Theodore Shih, Denver, Colorado and Dennis R. Voigt
We ended last issue talking about the purpose, design and initial evaluation of blinds. Now we come to one of the real challenges of judging retrievers: the evaluation of blinds and marks in relation to each other. Each weekend around the country we see great variation in how different judges tackle this topic. Many judges point to particular sections of the Rule book to justify their callbacks or placements. Another judge, Rule book in hand, comes to a different conclusion. So our first issue is: what does the Rule book say, and what is its intent? Our second issue is the challenge of comparing apples and oranges. The apples are the marks and the oranges are the blinds. It’s not just a case of spotting a rotten apple or orange, but also of deciding when a particular orange is preferred over a particular apple. Even if apples are worth more than oranges, a good orange may have much more value than a poor apple.
Dennis: Ted, I don’t know if you like apples better than oranges, but I do know that many judges rank marks much higher than blinds. They invariably cite the Rule book. Both the AKC and CKC retriever field trial rules have similar, though not identical, wording.
What the Rule Books say:
AKC-Basic Principles, “Accurate marking is of primary importance. A dog which marks the fall of a bird, uses the wind, follows a strong cripple, and will take direction from his handler is of great value.”
CKC-Basic Principles, “Accurate marking and memory of multiple marks are of primary importance, but a dog which proceeds to the general area of the fall and uses the wind to hunt the bird out in a pleasing manner is of great value. A dog that handles sharply and positively on a mark should be given credit for its performance based on the relative performance of other dogs participating in the test.
A dog that will handle sharply and positively take directions from his handler on a blind retrieve is also of great value.”
When I read this carefully, it says that accuracy is primary but that a good hunt is of great value. Both books say that a dog that takes direction is of great value. The CKC book additionally mentions values such as memory, style (a “pleasing manner”) and sharp, positive handling. This section was amended from a version like the AKC wording; obviously somebody thought it needed elaboration.
Both AKC and CKC books elaborate later on. The AKC has a supplement whose stated purpose is greater uniformity in the conduct and judging of trials. Under evaluation of dog work, it says, “Accurate marking or memory of falls is of paramount importance.” The word paramount is another take on primary; it means foremost, at the top. The book goes on to talk a lot about hunts, area of fall, intelligence, nose, perseverance and style. This is a really important section when evaluating marks, and we covered many aspects earlier. This section is followed by the evaluation of abilities acquired through training. It emphasizes that these are of lesser importance in minor stakes but that “full refinement” is expected in championship stakes. The importance of control and response to direction is emphasized, along with discussion of faults.
The CKC book has these topics not in a supplement but in a separate evaluation section (16). Although the layout is the same and some additional traits (e.g. sagacity) are identified, the wording is essentially the same: the use of the word paramount, and the importance of control and response to direction. Both books state that the judges MUST judge the dogs for their abilities acquired through training as well as their natural abilities.
When I study all these sections and compare both books, I conclude that we have to seriously consider blinds in relation to marks. Both are of great value. It also seems that a dog must display good marking as judged by accuracy, good hunts and memory. The references to primary and paramount suggest marks are a first consideration, but that doesn’t necessarily mean they trump all blind work. I’m more inclined to think it means that if the dog does not display good marks, he is in real jeopardy for callbacks or placings. However, the dilemma remains for the dog that has excellent marks but a weak blind. What do you think?
Comparing Marks and Blinds on game day:
Ted: I have a saying – “Great marks grant a dog grace, not absolution” – that reflects my personal attempt to find a balance between marks and blinds. To honour the premise that “marking is of primary importance,” I will carry a dog that has great marks, but poor blinds. That is the grace great marks obtain.
However, a failure on a blind is a failure. No matter how great the marks, they do not grant absolution from failure. If you fail a blind, no matter how magnificent your marks, you are out of the trial.
As for the issue of not considering blinds at the end of the day, I cannot believe that anyone who has seriously studied the Rule book could conclude that blinds are to have no consequence in awarding placements.
Dennis: I have to agree. I cannot turn a blind eye to a bad blind. But I do find I have trouble dropping dogs on blinds in comparison to many judges that I judge with. It has to be a pretty flagrant series of refusals, a failure to negotiate hazards, or a dog out of control. Only in such cases can really good marks be ignored. I can go along with dropping a dog on a first weak blind only when it is paired with two or three weak marks. In other words, after 4-5 birds you may not call back a dog who hasn’t failed any one bird but, on the other hand, hasn’t had any good work either. Quite often such dogs get one more look.
When determining placements, one might encounter a situation where Dog 1 had six perfect marks, one poor blind and one OK blind, while Dog 2 had four perfect marks and two marks with reasonable hunts, combined with two very good blinds. I’d vote for Dog 2, but I know that some judges would be so impressed with six perfect marks that they’d ignore the blinds.
I often hear judges declare “marking is of primary importance” when they are determining or defending their placements at the end of the day. Thus, quite weak blinds get ignored when awarding the places. On the one hand, perhaps the biggest frustration I hear from fellow handlers is dismay about being dropped on the land or water blinds after excellent marks earlier. We’ve all run those trials where some specific hazard or a critical location becomes the criteria for callbacks. These two examples, ironically, display opposite philosophies. It seems to me that the evaluation of blinds is a source of great variation in field trials on both sides of the border. I often wonder if we are getting it right. Or have we drifted into an area where it’s hard to evaluate blinds versus marks in the interests of finding the best dogs?
Ted: The more I think about this dilemma, the more I think it may stem from a well intentioned, albeit – in my opinion – misguided effort to eliminate subjectivity from judging.
By this I mean that people want clear, well defined criteria – objective criteria, if you will – by which to judge dog performance. When judging marking, people judge lines because a line provides a clear objective definition of performance. Similarly, when judging blinds, people count whistles and/or look to see whether the dog has navigated one of several key holes successfully. Those criteria are objective, easy to apply, and easy to defend.
If a judge focuses on lines on marks, and whistles and key holes on blinds, then it becomes easier for him to judge – and to defend his decisions – than it is if the judge is to make subjective decisions about the quality of the mark and of the “performance of [a blind] in its entirety.”
My analogy would be that of judging student performance in school. In the public schools, there has been a movement towards assessing student performance on the basis of test scores. Of course, the proponents of standardized testing point to its objectivity. Previously, students were graded individually by teachers. This method was, of course, subjective. Proponents of subjectivity in grading focus on the complexity of evaluating learning and argue that standardized tests do a poor job of measuring a child’s education.
I think we face the same dilemma here. Curiously, I think that the movement in field trial judging has caused contestants to look for keyholes where none may exist. I recently judged in Kentucky. At the end of the trial, I was speaking with Andy Attar, who was one of the contestants whose dogs I judged. Andy described the different parameters that he thought might have defined our land blind. I told him that he had made the land blind far more difficult than I and my co-judge had imagined. But, I think that Andy’s comments reflect the direction that the sport is moving.
Dennis: You may be dead on regarding the quest to eliminate subjectivity. I routinely hear “new” judges complain that the Rule book is not black and white or clear enough. They seem to want the objective, standardized-testing kind of book. And yet when I read the Book these days, I find all sorts of guidance, philosophy and details about evaluation. Even though I have a scientist’s background, I have no problem with making assessments about the quality of work. That necessitates some subjectivity, but that subjectivity doesn’t mean I can’t be consistent or even objective about the quality of work. In other words, I should have no bias about breed, owner or previous knowledge of the dog. In your business as a lawyer, surely the justice system also has room for both subjectivity and objectivity.
Ted: In the justice system, laws provide a framework for decision. However, those laws must be applied to a variety of circumstances and then applied to those circumstances through the subjective decisions of judge or jury. In our sport, the Rule Book provides the framework for how a dog’s work should be judged. Then the judges must apply that framework to the actual circumstances of their field trial.
I think that what concerns both of us is the frequency with which people disregard the Rule Book, leaving us with subjectivity which ignores the underlying framework for decision making.
I think that the Rule Book is a marvellous piece of work. Each time I read it I find some new nuance that I had not noticed before. Each time I read it, I have a greater appreciation for the insight and understanding that the original authors had for dog work.
One thing that has become apparent to me as we have progressed in our series of articles, and especially this one, is how much I value the subjectivity that thoughtful judges bring to the process.
For example, the word “style” is used eight (8) times in the Rule Book, and “poor style” is classified as a moderate fault. Obviously, style was significant to the writers of the Rule Book. Yet how does one define style objectively? The answer is that it cannot be done; it is a subjective judgment. The late Supreme Court Justice Potter Stewart once said of pornography, “I know it when I see it.” I feel the same way about style: “I know it when I see it.”
I think that style should play a larger role in our evaluation of dog work than it seems to play. More to the point of this article, I believe that the “modern” blind, with its many tight corridors and key holes, penalizes the stylish dog and rewards the piggy one.
Dennis: I agree with your example of judging style as being one of those subjective but extremely important areas that we should all consider. Unfortunately, such traits see much variation in evaluation from one weekend to the next. Our original problem of apples and oranges probably has raspberries, lemons and bananas to consider also. It seems we have a whole bowl of fruit to consider.
But even if we just considered apples and oranges, I suspect that we couldn’t paint a simple picture. Granted, we could draw out a dozen performances, discuss each in turn and come to some conclusions. But I’ve also seen discussions about such diagrams deteriorate into debates on whether Mark A is a good mark, an OK mark or a weak mark. Similarly, some will argue about how good Blind B is. I think that’s a totally different topic from how to compare marks against blinds. The important discussion for this article is this: if we agree on the scoring and value of the marks and also of each of the blinds, how do we evaluate and compare them together to determine callbacks and placements?
Let’s assume that as co-judges, you and I can agree on the scoring of individual marks or blinds. We agree on what is good and what is bad. Now, when it comes to evaluating overall performance and ranking dogs, what factors do we consider? How important is the difficulty of a particular marking set-up versus a blind? How will we rank cast refusals on a blind against marking weaknesses such as a big hunt on the wrong side or a grossly cheaty line to a mark? How do we rate a handle on a mark against a dog that missed a hazard on a blind? We need to agree on this for both callbacks and placements.
During callbacks I want to look at both the apples and oranges, as well as any other fruit. But you and I don’t have to rank the dogs and their marks versus blinds. All I want to do is look at the overall performance to confirm that we should keep the dog. I don’t care about bruised fruit, only rotten fruit! I think there are quite a few judges who look at recent work and try to find reasons to drop a dog during callbacks. They don’t seem to have trouble dropping dogs after blinds. This is quite a different philosophy, perhaps because during callbacks they don’t look enough at overall performances to find reasons to keep a dog. Then, at the end of the day, they declare that “marking is of primary importance” and the best marks win. Others say that because you have two blind retrieves and 6-8 marked retrieves, you are automatically putting much more emphasis on marks, making them primary.
My personal view is that at the end of the day I should strive to evaluate each retrieve and its difficulty in relation to every other retrieve (blinds and marks) and every other dog. I do think performance on marks should be directly compared with performance on blinds. I want to see the all-around dog – one who can mark but also line and handle. Incidentally, when you do this, I find it interesting how seldom the line to the mark is important.
Ted: I guess I am really fixated on the subjective aspect of judging at the moment. First, when evaluating the significance of blinds relative to marks in awarding placements as a general matter, I would also agree that I could not answer without knowing the relative difficulty of the specific tests. For example, if for one reason or another my marking tests (or specific marks within my land and water marking tests) were easy and many of the dogs smoked the marks, while my land blind or water blind chewed them up, I would emphasize the blinds more in evaluating placements. My answer would depend on the specific circumstances.
Since I am fresh from a recent judging assignment, I offer some examples as to how I approach judging marks and blinds to find the best overall performance.
The first series was a wide-open triple with two retired guns. The field was fabulous, our holding blinds were well concealed, and the wind remained constant all day. The test was very difficult. Of the 62 starters, 24 picked up or handled. Another 8 dogs either hunted two birds hard or simply had no idea where one of the birds was and stumbled on it. We brought 32 dogs back to the land blind. Of those 32, maybe 5-8 had excellent work on all three birds. The general consensus was that callbacks were generous.
Our land blind was of average difficulty. Five dogs picked up. A few dogs did excellent work. A few were hacky. Most dogs did okay work. We brought back everyone who did not pick up. Why? Because, other than the pick-ups, there were no failures, and because there really wasn’t enough separation to justify dropping the remaining dogs. So we brought 27 dogs back to the water blind.
Our water blind was of above average difficulty, but not what I would call particularly hard, because the wind was at our backs, and not a crosswind towards land as it was when we set up. I think five dogs picked up. Those dogs failed. Using your analogy, the fruit was rotten. Two dogs avoided the hazards of the test by swimming way wide of a heavily scented shoreline where we had sluiced the water. In our opinion, those dogs had also failed because they had made no effort to perform our test and demonstrate their ability to take a line and/or willingness to accept directions from the handler. Again, the fruit was rotten. Seven dogs with mediocre marks also had mediocre water blinds. Using your analogy, their fruit was all bruised, with nothing to recommend itself for further evaluation. A few dogs had mediocre water blinds (bruised fruit), but good land marks on a very hard test (perfect-looking and -tasting fruit). So we came back with fifteen dogs to the water marks.
On the water marks, we had a triple of above average difficulty. Two dogs picked up. One dog handled. Three or four dogs had big rangy hunts on the long retired bird. Although there were slight differences, the remaining dogs had very similar work. In working out our placements, we looked first at our marks. Because there was little separation in the dogs on the water marks, and because our land marks were so hard, we focused on the land marks.
There were four dogs that had very good land marks. We then looked at our land blind and our water blind, placing more emphasis on the latter, because it was harder. When we went through that process we had two dogs at the top of the heap. The winning dog had slightly better land marks, equivalent water marks, and superior blinds on both land and water. In my mind, the dog’s overall performance on both marks and blinds earned it the win.
Dennis: Here’s my summary, then, of picking out which fruit bowl – which dog – is the best. Each of the fruits has value, be they surrogates for marks, blinds, style or obedience. Evaluation of both taste and appearance may vary among judges, but it should be consistent. Subjective as well as objective evaluation may be necessary. Rotten fruit can ruin the whole bowl: you cannot keep good apples and ignore rotten oranges. Rotten fruit has to go, no matter what it is. But don’t prematurely throw away the whole fruit bowl just because some fruit you don’t care for is slightly bruised. If a bowl contains fruit that is much harder to obtain, it should be given high value in comparison to others. You should always look for the best overall fruit bowl, not just the shiniest apples.
Maybe next issue we should talk not just about the fruit but about the menu – that is, the Rule book. Where can it be improved? Is the philosophy of recent changes and recommendations taking us down the road to better judging?