Details
I'm reading Shin's Maximum segment sum is back: deriving algorithms for two segment problems with bounded lengths and trying to work out the details as exercises. So far I've gone through about 3/4 of the first page, deriving mssu.
The first claim we need is
  sum . fitlen
=   { split cancellation (backwards) }
  fst . <sum, len> . fitlen
=   { claim: <R, S> . C = <R, S . C> }
  fst . <sum, len . fitlen>
=   { len . fitlen = (≤ U) . len }
  fst . <sum, (≤ U) . len>
=   { product relator }
  fst . (id × (≤ U)) . <sum, len>
=   { definition of fit }
  fst . fit . <sum, len>

The subclaim follows from
R ∩ (S . C) = (R ∩ S) . C

where C is coreflexive, since <R, S> = (outlº . R) ∩ (outrº . S).
  R ∩ (S . C)
⊆   { modular law }
  ((R . Cº) ∩ S) . C
⊆   { C coreflexive }
  (R ∩ S) . C
=   { C simple }
  (R . C) ∩ (S . C)
⊆   { C coreflexive }
  R ∩ (S . C)
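As a concrete sanity check on the first claim, here is a small functional model in which coreflexives become partial identities in the Maybe monad. The bound u (standing in for U) and the test data are my own choices, and this is of course only a model of the relational statement, not the statement itself:

```haskell
-- A functional sketch of  sum . fitlen = fst . fit . <sum, len>,
-- with coreflexives modelled as Maybe-valued partial identities.
u :: Int
u = 3  -- hypothetical choice for the bound U

-- fitlen: keep a list only if its length is at most u
fitlen :: [Int] -> Maybe [Int]
fitlen xs = if length xs <= u then Just xs else Nothing

-- fit = id × (≤ u): keep a (sum, length) pair only if the length fits
fit :: (Int, Int) -> Maybe (Int, Int)
fit p@(_, l) = if l <= u then Just p else Nothing

-- the two sides of the claimed equality
lhs, rhs :: [Int] -> Maybe Int
lhs xs = fmap sum (fitlen xs)
rhs xs = fmap fst (fit (sum xs, length xs))
```

On lists that fit, both sides return Just the sum; on lists that are too long, both sides are undefined (Nothing), as the claim predicts.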
Next we have to invent R such that the fusion condition

fit . <sum, len> . (const [] ∪ cons) = R . (id × (fit . <sum, len>))

holds. We reason:
  fit . <sum, len> . (const [] ∪ cons)
=   { composition distributes over join }
  (fit . <sum, len> . const []) ∪ (fit . <sum, len> . cons)
=   { naturality of const (??) }
  const (0, 0) ∪ (fit . <sum, len> . cons)
=   { <sum, len> is a fold by banana-split law (??); fold evaluation }
  const (0, 0) ∪ (fit . add . (id × <sum, len>))
=   { claim: fit . add = fit . add . (id × fit) }
  const (0, 0) ∪ (fit . add . (id × (fit . <sum, len>)))
=   { naturality of const (??); composition distributes over join }
  (const (0, 0) ∪ (fit . add)) . (id × (fit . <sum, len>))

Therefore we have
R = const (0, 0) ∪ (fit . add). The subclaim holds since 1 + l ≤ U implies l ≤ U.
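To convince myself what the fold with this R actually computes, here is a list-valued sketch (taking the power transpose, so a relation becomes a function into lists of results). The bound u and the examples are mine; the fold appears to return the (sum, length) pairs of exactly those prefixes whose length fits:

```haskell
import Data.List (inits, sort)

u :: Int
u = 3  -- hypothetical choice for the bound U

-- fit as a coreflexive, modelled as a list-valued partial identity
fit :: (Int, Int) -> [(Int, Int)]
fit p@(_, l) = [p | l <= u]

-- the relational fold with  R = const (0, 0) ∪ (fit . add),
-- modelled by collecting all possible results in a list
foldR :: [Int] -> [(Int, Int)]
foldR []      = [(0, 0)]
foldR (a : x) = (0, 0) : [q | (s, l) <- foldR x, q <- fit (a + s, 1 + l)]

-- specification: (sum, length) of every prefix that fits
spec :: [Int] -> [(Int, Int)]
spec xs = [(sum p, length p) | p <- inits xs, length p <= u]
```

The join with const (0, 0) is what lets the fold "stop early" at any point, which is why it generates all fitting prefixes rather than just the whole list.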
And now we reach the monotonicity condition. Leaving out a somewhat irrelevant component, the antecedent given in the paper is:

s₁ ≤ s₂ ∧ (s₃ = 0 ∨ (s₃ = a + s₁ ∧ 1 + l₁ ≤ U))
But 1 + l₁ ≤ U doesn't ensure that l₄ = 1 + l₂ ≤ U, since the two lengths are unrelated. Thus one can strengthen the antecedent:

s₁ ≤ s₂ ∧ l₁ ≥ l₂ ∧ (s₃ = 0 ∨ (s₃ = a + s₁ ∧ 1 + l₁ ≤ U))

and we'll have 1 + l₂ ≤ 1 + l₁ ≤ U.
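To double-check the strengthened condition, here is a brute-force reading of its cons branch. The encoding of the consequent (a fitting step on the right with a + s₁ ≤ a + s₂ and 1 + l₂ ≤ U) is my own reconstruction, as are the bound and the search range:

```haskell
u :: Int
u = 3  -- hypothetical choice for the bound U

-- cons branch of the strengthened monotonicity condition:
-- from  s1 <= s2,  l1 >= l2  and  1 + l1 <= u,
-- the step on the right also fits and preserves the order on sums
mono :: Int -> Int -> Int -> Int -> Int -> Bool
mono a s1 l1 s2 l2 =
  not (s1 <= s2 && l1 >= l2 && 1 + l1 <= u)
    || (a + s1 <= a + s2 && 1 + l2 <= u)

-- exhaustive check over a small range of integers
checked :: Bool
checked = and [ mono a s1 l1 s2 l2
              | a <- r, s1 <- r, l1 <- r, s2 <- r, l2 <- r ]
  where r = [-2 .. 4]
```

The check passes precisely because of the added conjunct l₁ ≥ l₂, which gives the chain 1 + l₂ ≤ 1 + l₁ ≤ U.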
Finally, the proof of the thinning theorem is structurally equivalent to that of the greedy theorem.
This exercise reveals that I haven't really understood datatypes (products, coproducts, ...) and their properties in Rel. For example, I am not certain whether naturality (in particular, "theorems for free") can be applied as one would in Fun. (And I don't understand theorems for free either.)
--
I'll try to finish the rest in the evening...
Labels: Program Derivation